Jan 30 14:21:46.094466 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 14:21:46.094532 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 14:21:46.094559 kernel: BIOS-provided physical RAM map:
Jan 30 14:21:46.094580 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 14:21:46.094600 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 14:21:46.094624 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 14:21:46.094680 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 30 14:21:46.094701 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 30 14:21:46.094721 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 14:21:46.094741 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 14:21:46.094761 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 30 14:21:46.094781 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 14:21:46.094801 kernel: NX (Execute Disable) protection: active
Jan 30 14:21:46.094821 kernel: APIC: Static calls initialized
Jan 30 14:21:46.094851 kernel: SMBIOS 3.0.0 present.
Jan 30 14:21:46.094873 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 30 14:21:46.094894 kernel: Hypervisor detected: KVM
Jan 30 14:21:46.094914 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 14:21:46.094935 kernel: kvm-clock: using sched offset of 3625637500 cycles
Jan 30 14:21:46.094961 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 14:21:46.094983 kernel: tsc: Detected 1996.249 MHz processor
Jan 30 14:21:46.095005 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 14:21:46.095028 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 14:21:46.095050 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 30 14:21:46.095072 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 14:21:46.095094 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 14:21:46.095115 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 30 14:21:46.095136 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:21:46.095162 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 30 14:21:46.095184 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:21:46.095205 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:21:46.095227 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:21:46.095248 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 30 14:21:46.095270 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:21:46.095291 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:21:46.095313 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 30 14:21:46.095334 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 30 14:21:46.095360 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 30 14:21:46.095381 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 30 14:21:46.095403 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 30 14:21:46.095432 kernel: No NUMA configuration found
Jan 30 14:21:46.095454 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 30 14:21:46.095476 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jan 30 14:21:46.095499 kernel: Zone ranges:
Jan 30 14:21:46.095526 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 14:21:46.095548 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 14:21:46.095570 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 30 14:21:46.095592 kernel: Movable zone start for each node
Jan 30 14:21:46.095614 kernel: Early memory node ranges
Jan 30 14:21:46.095636 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 14:21:46.095684 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 30 14:21:46.095706 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 30 14:21:46.095733 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 30 14:21:46.095756 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 14:21:46.095778 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 14:21:46.095800 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 30 14:21:46.095823 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 14:21:46.095845 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 14:21:46.095868 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 14:21:46.095890 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 14:21:46.095912 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 14:21:46.095939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 14:21:46.095961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 14:21:46.095984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 14:21:46.096006 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 14:21:46.096028 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 14:21:46.096051 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 14:21:46.096073 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 30 14:21:46.096095 kernel: Booting paravirtualized kernel on KVM
Jan 30 14:21:46.096118 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 14:21:46.096145 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 14:21:46.096168 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 14:21:46.096190 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 14:21:46.096212 kernel: pcpu-alloc: [0] 0 1
Jan 30 14:21:46.096234 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 14:21:46.096260 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 14:21:46.096284 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:21:46.096306 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:21:46.096333 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:21:46.096356 kernel: Fallback order for Node 0: 0
Jan 30 14:21:46.096378 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 30 14:21:46.096400 kernel: Policy zone: Normal
Jan 30 14:21:46.096422 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:21:46.096444 kernel: software IO TLB: area num 2.
Jan 30 14:21:46.096467 kernel: Memory: 3964156K/4193772K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 229356K reserved, 0K cma-reserved)
Jan 30 14:21:46.096490 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:21:46.096516 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 14:21:46.096539 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 14:21:46.096560 kernel: Dynamic Preempt: voluntary
Jan 30 14:21:46.096582 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:21:46.096607 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:21:46.096630 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:21:46.096678 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:21:46.096701 kernel: Rude variant of Tasks RCU enabled.
Jan 30 14:21:46.096723 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:21:46.096746 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:21:46.096774 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:21:46.096796 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 14:21:46.096818 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:21:46.096840 kernel: Console: colour VGA+ 80x25
Jan 30 14:21:46.096862 kernel: printk: console [tty0] enabled
Jan 30 14:21:46.096884 kernel: printk: console [ttyS0] enabled
Jan 30 14:21:46.096907 kernel: ACPI: Core revision 20230628
Jan 30 14:21:46.096929 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 14:21:46.096951 kernel: x2apic enabled
Jan 30 14:21:46.096978 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 14:21:46.097000 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 14:21:46.097022 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 14:21:46.097045 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 30 14:21:46.097068 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 14:21:46.097090 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 14:21:46.097113 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 14:21:46.097135 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 14:21:46.097157 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 14:21:46.097185 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 14:21:46.097207 kernel: Speculative Store Bypass: Vulnerable
Jan 30 14:21:46.097230 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 30 14:21:46.097252 kernel: Freeing SMP alternatives memory: 32K
Jan 30 14:21:46.097289 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:21:46.097317 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:21:46.097361 kernel: landlock: Up and running.
Jan 30 14:21:46.097385 kernel: SELinux: Initializing.
Jan 30 14:21:46.097408 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:21:46.097432 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:21:46.097455 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 30 14:21:46.097479 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:21:46.097508 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:21:46.097532 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:21:46.097556 kernel: Performance Events: AMD PMU driver.
Jan 30 14:21:46.097579 kernel: ... version: 0
Jan 30 14:21:46.097606 kernel: ... bit width: 48
Jan 30 14:21:46.097629 kernel: ... generic registers: 4
Jan 30 14:21:46.097677 kernel: ... value mask: 0000ffffffffffff
Jan 30 14:21:46.097701 kernel: ... max period: 00007fffffffffff
Jan 30 14:21:46.097724 kernel: ... fixed-purpose events: 0
Jan 30 14:21:46.097747 kernel: ... event mask: 000000000000000f
Jan 30 14:21:46.097770 kernel: signal: max sigframe size: 1440
Jan 30 14:21:46.097793 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:21:46.097817 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:21:46.097841 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:21:46.097869 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 14:21:46.097892 kernel: .... node #0, CPUs: #1
Jan 30 14:21:46.097916 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:21:46.097939 kernel: smpboot: Max logical packages: 2
Jan 30 14:21:46.097962 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 30 14:21:46.097986 kernel: devtmpfs: initialized
Jan 30 14:21:46.098009 kernel: x86/mm: Memory block size: 128MB
Jan 30 14:21:46.098032 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:21:46.098056 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:21:46.098084 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:21:46.098107 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:21:46.098130 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:21:46.098154 kernel: audit: type=2000 audit(1738246905.607:1): state=initialized audit_enabled=0 res=1
Jan 30 14:21:46.098177 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:21:46.098201 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 14:21:46.098224 kernel: cpuidle: using governor menu
Jan 30 14:21:46.098247 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:21:46.098271 kernel: dca service started, version 1.12.1
Jan 30 14:21:46.098361 kernel: PCI: Using configuration type 1 for base access
Jan 30 14:21:46.098496 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 14:21:46.099791 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:21:46.099819 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:21:46.099843 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:21:46.099867 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:21:46.099891 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:21:46.099914 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:21:46.099937 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:21:46.099967 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 14:21:46.099990 kernel: ACPI: Interpreter enabled
Jan 30 14:21:46.100013 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 14:21:46.100037 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 14:21:46.100060 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 14:21:46.100084 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 14:21:46.100107 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 14:21:46.100130 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 14:21:46.100456 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:21:46.100752 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 14:21:46.100982 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 14:21:46.101019 kernel: acpiphp: Slot [3] registered
Jan 30 14:21:46.101044 kernel: acpiphp: Slot [4] registered
Jan 30 14:21:46.101067 kernel: acpiphp: Slot [5] registered
Jan 30 14:21:46.101090 kernel: acpiphp: Slot [6] registered
Jan 30 14:21:46.101113 kernel: acpiphp: Slot [7] registered
Jan 30 14:21:46.101145 kernel: acpiphp: Slot [8] registered
Jan 30 14:21:46.101169 kernel: acpiphp: Slot [9] registered
Jan 30 14:21:46.101192 kernel: acpiphp: Slot [10] registered
Jan 30 14:21:46.101215 kernel: acpiphp: Slot [11] registered
Jan 30 14:21:46.101238 kernel: acpiphp: Slot [12] registered
Jan 30 14:21:46.101261 kernel: acpiphp: Slot [13] registered
Jan 30 14:21:46.101284 kernel: acpiphp: Slot [14] registered
Jan 30 14:21:46.101307 kernel: acpiphp: Slot [15] registered
Jan 30 14:21:46.101330 kernel: acpiphp: Slot [16] registered
Jan 30 14:21:46.101377 kernel: acpiphp: Slot [17] registered
Jan 30 14:21:46.101407 kernel: acpiphp: Slot [18] registered
Jan 30 14:21:46.101430 kernel: acpiphp: Slot [19] registered
Jan 30 14:21:46.101452 kernel: acpiphp: Slot [20] registered
Jan 30 14:21:46.101475 kernel: acpiphp: Slot [21] registered
Jan 30 14:21:46.101498 kernel: acpiphp: Slot [22] registered
Jan 30 14:21:46.101521 kernel: acpiphp: Slot [23] registered
Jan 30 14:21:46.101543 kernel: acpiphp: Slot [24] registered
Jan 30 14:21:46.101566 kernel: acpiphp: Slot [25] registered
Jan 30 14:21:46.101589 kernel: acpiphp: Slot [26] registered
Jan 30 14:21:46.101616 kernel: acpiphp: Slot [27] registered
Jan 30 14:21:46.101677 kernel: acpiphp: Slot [28] registered
Jan 30 14:21:46.101703 kernel: acpiphp: Slot [29] registered
Jan 30 14:21:46.101726 kernel: acpiphp: Slot [30] registered
Jan 30 14:21:46.101750 kernel: acpiphp: Slot [31] registered
Jan 30 14:21:46.101773 kernel: PCI host bridge to bus 0000:00
Jan 30 14:21:46.102003 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 14:21:46.102212 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 14:21:46.102430 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 14:21:46.102632 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 14:21:46.102879 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 30 14:21:46.103145 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 14:21:46.103417 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 14:21:46.104456 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 14:21:46.104785 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 30 14:21:46.105038 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 30 14:21:46.105270 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 14:21:46.105546 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 14:21:46.105831 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 14:21:46.106094 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 14:21:46.106361 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 14:21:46.106580 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 14:21:46.106804 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 14:21:46.107002 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 30 14:21:46.107178 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 30 14:21:46.107352 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 30 14:21:46.107527 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 30 14:21:46.107826 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 30 14:21:46.109734 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 14:21:46.109945 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 14:21:46.110121 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 30 14:21:46.110294 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 30 14:21:46.110431 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 30 14:21:46.110521 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 30 14:21:46.110618 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 30 14:21:46.110740 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 14:21:46.110836 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 30 14:21:46.110926 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 30 14:21:46.111023 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 30 14:21:46.111115 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 30 14:21:46.111204 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 30 14:21:46.111302 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 14:21:46.111398 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 30 14:21:46.111489 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 30 14:21:46.111585 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 30 14:21:46.111615 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 14:21:46.111630 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 14:21:46.113729 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 14:21:46.113742 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 14:21:46.113751 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 14:21:46.113765 kernel: iommu: Default domain type: Translated
Jan 30 14:21:46.113774 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 14:21:46.113784 kernel: PCI: Using ACPI for IRQ routing
Jan 30 14:21:46.113793 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 14:21:46.113803 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 14:21:46.113812 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 30 14:21:46.113925 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 14:21:46.114019 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 14:21:46.114110 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 14:21:46.114128 kernel: vgaarb: loaded
Jan 30 14:21:46.114138 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 14:21:46.114148 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:21:46.114157 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:21:46.114167 kernel: pnp: PnP ACPI init
Jan 30 14:21:46.114258 kernel: pnp 00:03: [dma 2]
Jan 30 14:21:46.114275 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 14:21:46.114285 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 14:21:46.114298 kernel: NET: Registered PF_INET protocol family
Jan 30 14:21:46.114308 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:21:46.114317 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 14:21:46.114327 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:21:46.114336 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:21:46.114346 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 14:21:46.114356 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 14:21:46.114366 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:21:46.114375 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:21:46.114387 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:21:46.114396 kernel: NET: Registered PF_XDP protocol family
Jan 30 14:21:46.114477 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 14:21:46.114557 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 14:21:46.114656 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 14:21:46.114742 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 30 14:21:46.114820 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 30 14:21:46.114913 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 14:21:46.115011 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 14:21:46.115025 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:21:46.115035 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 14:21:46.115045 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 30 14:21:46.115054 kernel: Initialise system trusted keyrings
Jan 30 14:21:46.115064 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 14:21:46.115074 kernel: Key type asymmetric registered
Jan 30 14:21:46.115083 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:21:46.115096 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 14:21:46.115106 kernel: io scheduler mq-deadline registered
Jan 30 14:21:46.115115 kernel: io scheduler kyber registered
Jan 30 14:21:46.115125 kernel: io scheduler bfq registered
Jan 30 14:21:46.115134 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 14:21:46.115144 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 14:21:46.115154 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 14:21:46.115164 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 14:21:46.115173 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 14:21:46.115183 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:21:46.115195 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 14:21:46.115204 kernel: random: crng init done
Jan 30 14:21:46.115214 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 14:21:46.115223 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 14:21:46.115233 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 14:21:46.115324 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 14:21:46.115340 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 14:21:46.115419 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 14:21:46.115505 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T14:21:45 UTC (1738246905)
Jan 30 14:21:46.115588 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 30 14:21:46.115602 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 14:21:46.115612 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:21:46.115622 kernel: Segment Routing with IPv6
Jan 30 14:21:46.115631 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:21:46.116360 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:21:46.116372 kernel: Key type dns_resolver registered
Jan 30 14:21:46.116385 kernel: IPI shorthand broadcast: enabled
Jan 30 14:21:46.116395 kernel: sched_clock: Marking stable (1009007602, 181458319)->(1223864167, -33398246)
Jan 30 14:21:46.116405 kernel: registered taskstats version 1
Jan 30 14:21:46.116414 kernel: Loading compiled-in X.509 certificates
Jan 30 14:21:46.116424 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4'
Jan 30 14:21:46.116433 kernel: Key type .fscrypt registered
Jan 30 14:21:46.116443 kernel: Key type fscrypt-provisioning registered
Jan 30 14:21:46.116452 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 14:21:46.116462 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:21:46.116474 kernel: ima: No architecture policies found
Jan 30 14:21:46.116483 kernel: clk: Disabling unused clocks
Jan 30 14:21:46.116493 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 30 14:21:46.116502 kernel: Write protecting the kernel read-only data: 38912k
Jan 30 14:21:46.116512 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 30 14:21:46.116521 kernel: Run /init as init process
Jan 30 14:21:46.116531 kernel: with arguments:
Jan 30 14:21:46.117050 kernel: /init
Jan 30 14:21:46.117060 kernel: with environment:
Jan 30 14:21:46.117073 kernel: HOME=/
Jan 30 14:21:46.117082 kernel: TERM=linux
Jan 30 14:21:46.117092 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:21:46.117104 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:21:46.117118 systemd[1]: Detected virtualization kvm.
Jan 30 14:21:46.117129 systemd[1]: Detected architecture x86-64.
Jan 30 14:21:46.117139 systemd[1]: Running in initrd.
Jan 30 14:21:46.117152 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:21:46.117163 systemd[1]: Hostname set to .
Jan 30 14:21:46.117174 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:21:46.117184 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:21:46.117194 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:21:46.117205 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:21:46.117216 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:21:46.117236 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:21:46.117249 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:21:46.118693 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:21:46.118718 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:21:46.118731 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:21:46.118742 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:21:46.118757 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:21:46.118768 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:21:46.118778 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:21:46.118789 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:21:46.118800 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:21:46.118811 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:21:46.118821 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:21:46.118833 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:21:46.118845 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:21:46.118856 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:21:46.118867 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:21:46.118878 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:21:46.118888 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:21:46.118899 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:21:46.118910 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:21:46.118920 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:21:46.118931 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:21:46.118944 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:21:46.118954 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:21:46.118965 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:21:46.118976 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 14:21:46.118987 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:21:46.119018 systemd-journald[184]: Collecting audit messages is disabled.
Jan 30 14:21:46.119048 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 14:21:46.119059 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 14:21:46.119074 kernel: Bridge firewalling registered
Jan 30 14:21:46.119085 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:21:46.119097 systemd-journald[184]: Journal started
Jan 30 14:21:46.119120 systemd-journald[184]: Runtime Journal (/run/log/journal/716846461e4d4c17ad2e046b26f6e809) is 8.0M, max 78.3M, 70.3M free.
Jan 30 14:21:46.068757 systemd-modules-load[185]: Inserted module 'overlay'
Jan 30 14:21:46.123906 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:21:46.112766 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 30 14:21:46.126598 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:21:46.167039 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:21:46.167732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:21:46.180905 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:21:46.184921 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:21:46.193929 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:21:46.203903 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:21:46.206330 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:21:46.211083 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 14:21:46.229081 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:21:46.231796 dracut-cmdline[214]: dracut-dracut-053
Jan 30 14:21:46.234715 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:21:46.239042 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 14:21:46.239909 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:21:46.251807 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:21:46.284228 systemd-resolved[236]: Positive Trust Anchors:
Jan 30 14:21:46.285066 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:21:46.285847 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:21:46.288618 systemd-resolved[236]: Defaulting to hostname 'linux'.
Jan 30 14:21:46.289535 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:21:46.291692 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:21:46.309676 kernel: SCSI subsystem initialized
Jan 30 14:21:46.319688 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 14:21:46.332713 kernel: iscsi: registered transport (tcp)
Jan 30 14:21:46.355322 kernel: iscsi: registered transport (qla4xxx)
Jan 30 14:21:46.355396 kernel: QLogic iSCSI HBA Driver
Jan 30 14:21:46.416514 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:21:46.423061 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 14:21:46.458707 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 14:21:46.458795 kernel: device-mapper: uevent: version 1.0.3
Jan 30 14:21:46.458820 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 14:21:46.503735 kernel: raid6: sse2x4 gen() 13161 MB/s
Jan 30 14:21:46.521693 kernel: raid6: sse2x2 gen() 15071 MB/s
Jan 30 14:21:46.540098 kernel: raid6: sse2x1 gen() 9868 MB/s
Jan 30 14:21:46.540155 kernel: raid6: using algorithm sse2x2 gen() 15071 MB/s
Jan 30 14:21:46.559358 kernel: raid6: .... xor() 8768 MB/s, rmw enabled
Jan 30 14:21:46.559420 kernel: raid6: using ssse3x2 recovery algorithm
Jan 30 14:21:46.582712 kernel: xor: measuring software checksum speed
Jan 30 14:21:46.582784 kernel: prefetch64-sse : 14397 MB/sec
Jan 30 14:21:46.583728 kernel: generic_sse : 14621 MB/sec
Jan 30 14:21:46.586296 kernel: xor: using function: generic_sse (14621 MB/sec)
Jan 30 14:21:46.759705 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 14:21:46.772254 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:21:46.782811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:21:46.810358 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Jan 30 14:21:46.819829 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:21:46.828851 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 14:21:46.850283 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Jan 30 14:21:46.890699 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:21:46.897870 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:21:46.952288 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:21:46.962549 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 14:21:47.000157 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:21:47.003173 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:21:47.004830 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:21:47.009979 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:21:47.018752 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 14:21:47.039921 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 30 14:21:47.086384 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 30 14:21:47.086503 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 14:21:47.086525 kernel: GPT:17805311 != 20971519
Jan 30 14:21:47.086537 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 14:21:47.086556 kernel: GPT:17805311 != 20971519
Jan 30 14:21:47.086568 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 14:21:47.086580 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 14:21:47.086594 kernel: libata version 3.00 loaded.
Jan 30 14:21:47.086607 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 30 14:21:47.094791 kernel: scsi host0: ata_piix
Jan 30 14:21:47.094923 kernel: scsi host1: ata_piix
Jan 30 14:21:47.095073 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 30 14:21:47.095090 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 30 14:21:47.043479 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:21:47.090958 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:21:47.091085 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:21:47.091785 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:21:47.093442 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:21:47.093585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:21:47.100980 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:21:47.119667 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (462)
Jan 30 14:21:47.111034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:21:47.134656 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (452)
Jan 30 14:21:47.151618 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 14:21:47.186748 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 14:21:47.187562 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:21:47.194228 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 14:21:47.198871 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 14:21:47.199425 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 14:21:47.210769 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 14:21:47.215784 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:21:47.223076 disk-uuid[503]: Primary Header is updated.
Jan 30 14:21:47.223076 disk-uuid[503]: Secondary Entries is updated.
Jan 30 14:21:47.223076 disk-uuid[503]: Secondary Header is updated.
Jan 30 14:21:47.235229 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 14:21:47.239068 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:21:48.253748 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 14:21:48.256002 disk-uuid[505]: The operation has completed successfully.
Jan 30 14:21:48.338445 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 14:21:48.338624 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 14:21:48.362806 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 14:21:48.385892 sh[524]: Success
Jan 30 14:21:48.411736 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 30 14:21:48.503701 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 14:21:48.505587 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 14:21:48.513830 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 14:21:48.548998 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58
Jan 30 14:21:48.549073 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:21:48.552554 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 14:21:48.558756 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 14:21:48.558822 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 14:21:48.577110 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 14:21:48.578195 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 14:21:48.586815 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 14:21:48.591459 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 14:21:48.601576 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 14:21:48.601669 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:21:48.601715 kernel: BTRFS info (device vda6): using free space tree
Jan 30 14:21:48.610726 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 14:21:48.622651 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 14:21:48.626273 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 14:21:48.641914 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 14:21:48.652070 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 14:21:48.708888 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:21:48.716936 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:21:48.737022 systemd-networkd[707]: lo: Link UP
Jan 30 14:21:48.737031 systemd-networkd[707]: lo: Gained carrier
Jan 30 14:21:48.738184 systemd-networkd[707]: Enumeration completed
Jan 30 14:21:48.738732 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:21:48.739517 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:21:48.739521 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:21:48.740630 systemd[1]: Reached target network.target - Network.
Jan 30 14:21:48.742848 systemd-networkd[707]: eth0: Link UP
Jan 30 14:21:48.742852 systemd-networkd[707]: eth0: Gained carrier
Jan 30 14:21:48.742860 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:21:48.758683 systemd-networkd[707]: eth0: DHCPv4 address 172.24.4.105/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 30 14:21:48.832419 ignition[633]: Ignition 2.20.0
Jan 30 14:21:48.832436 ignition[633]: Stage: fetch-offline
Jan 30 14:21:48.834338 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:21:48.832486 ignition[633]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:21:48.832502 ignition[633]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 14:21:48.832664 ignition[633]: parsed url from cmdline: ""
Jan 30 14:21:48.832674 ignition[633]: no config URL provided
Jan 30 14:21:48.832683 ignition[633]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:21:48.832696 ignition[633]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:21:48.832703 ignition[633]: failed to fetch config: resource requires networking
Jan 30 14:21:48.832980 ignition[633]: Ignition finished successfully
Jan 30 14:21:48.843871 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 14:21:48.859357 ignition[716]: Ignition 2.20.0
Jan 30 14:21:48.859372 ignition[716]: Stage: fetch
Jan 30 14:21:48.859609 ignition[716]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:21:48.859627 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 14:21:48.859788 ignition[716]: parsed url from cmdline: ""
Jan 30 14:21:48.859795 ignition[716]: no config URL provided
Jan 30 14:21:48.859802 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:21:48.859812 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:21:48.859920 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 30 14:21:48.859970 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 30 14:21:48.860034 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 30 14:21:49.192542 ignition[716]: GET result: OK
Jan 30 14:21:49.192630 ignition[716]: parsing config with SHA512: e921db488a08b03210fc2a049da620ffbeb6cf7627599d4962ea6d344e7ca4d8b193e5050d74905017f45903ef21b5b85ce6330eeb150162916809765dafbe71
Jan 30 14:21:49.198576 unknown[716]: fetched base config from "system"
Jan 30 14:21:49.198593 unknown[716]: fetched base config from "system"
Jan 30 14:21:49.199060 ignition[716]: fetch: fetch complete
Jan 30 14:21:49.198600 unknown[716]: fetched user config from "openstack"
Jan 30 14:21:49.199066 ignition[716]: fetch: fetch passed
Jan 30 14:21:49.201083 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 14:21:49.199112 ignition[716]: Ignition finished successfully
Jan 30 14:21:49.210064 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 14:21:49.226469 ignition[723]: Ignition 2.20.0
Jan 30 14:21:49.226486 ignition[723]: Stage: kargs
Jan 30 14:21:49.226753 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:21:49.226774 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 14:21:49.229553 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 14:21:49.228327 ignition[723]: kargs: kargs passed
Jan 30 14:21:49.228388 ignition[723]: Ignition finished successfully
Jan 30 14:21:49.236854 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 14:21:49.251849 ignition[730]: Ignition 2.20.0
Jan 30 14:21:49.252676 ignition[730]: Stage: disks
Jan 30 14:21:49.252875 ignition[730]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:21:49.252892 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 14:21:49.253965 ignition[730]: disks: disks passed
Jan 30 14:21:49.254016 ignition[730]: Ignition finished successfully
Jan 30 14:21:49.257775 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 14:21:49.259681 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 14:21:49.261212 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:21:49.263351 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:21:49.265367 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:21:49.267137 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:21:49.276939 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 14:21:49.300634 systemd-fsck[738]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 30 14:21:49.310567 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 14:21:49.317906 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 14:21:49.448663 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none.
Jan 30 14:21:49.449556 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 14:21:49.451078 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:21:49.459776 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:21:49.462537 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 14:21:49.463932 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 14:21:49.465863 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 30 14:21:49.468953 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 14:21:49.469007 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:21:49.480087 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (746)
Jan 30 14:21:49.480137 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 14:21:49.483603 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:21:49.483720 kernel: BTRFS info (device vda6): using free space tree
Jan 30 14:21:49.484590 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 14:21:49.488886 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 14:21:49.504741 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 14:21:49.511864 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:21:49.632491 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 14:21:49.639869 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory
Jan 30 14:21:49.647871 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 14:21:49.655278 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 14:21:49.747010 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 14:21:49.750744 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 14:21:49.752891 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 14:21:49.762762 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 14:21:49.766347 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 14:21:49.790396 ignition[863]: INFO : Ignition 2.20.0
Jan 30 14:21:49.790396 ignition[863]: INFO : Stage: mount
Jan 30 14:21:49.792040 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:21:49.792040 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 14:21:49.792040 ignition[863]: INFO : mount: mount passed
Jan 30 14:21:49.792040 ignition[863]: INFO : Ignition finished successfully
Jan 30 14:21:49.791785 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 14:21:49.793274 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 14:21:49.859919 systemd-networkd[707]: eth0: Gained IPv6LL
Jan 30 14:21:56.716525 coreos-metadata[748]: Jan 30 14:21:56.716 WARN failed to locate config-drive, using the metadata service API instead
Jan 30 14:21:56.756850 coreos-metadata[748]: Jan 30 14:21:56.756 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 30 14:21:56.768674 coreos-metadata[748]: Jan 30 14:21:56.768 INFO Fetch successful
Jan 30 14:21:56.770281 coreos-metadata[748]: Jan 30 14:21:56.768 INFO wrote hostname ci-4186-1-0-5-d272c7c7c0.novalocal to /sysroot/etc/hostname
Jan 30 14:21:56.774244 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 30 14:21:56.774540 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 30 14:21:56.793875 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 14:21:56.814952 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:21:56.847768 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (880)
Jan 30 14:21:56.858701 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 14:21:56.858779 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:21:56.858811 kernel: BTRFS info (device vda6): using free space tree
Jan 30 14:21:56.870691 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 14:21:56.875633 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:21:56.918714 ignition[898]: INFO : Ignition 2.20.0 Jan 30 14:21:56.918714 ignition[898]: INFO : Stage: files Jan 30 14:21:56.918714 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:21:56.918714 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 14:21:56.926987 ignition[898]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:21:56.929261 ignition[898]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:21:56.929261 ignition[898]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:21:56.934014 ignition[898]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:21:56.936200 ignition[898]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:21:56.936200 ignition[898]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:21:56.935622 unknown[898]: wrote ssh authorized keys file for user: core Jan 30 14:21:56.942135 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 14:21:56.942135 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 14:21:56.998933 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 14:21:57.436516 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 14:21:57.438535 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 14:21:57.438535 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 14:22:02.951362 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 14:22:03.558261 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 14:22:03.558261 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:22:03.562740 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 14:22:03.562740 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:22:03.562740 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:22:03.562740 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:22:03.562740 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:22:03.562740 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:22:03.562740 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:22:03.562740 
ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:22:03.562740 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:22:03.562740 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 14:22:03.562740 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 14:22:03.562740 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 14:22:03.562740 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 14:22:06.586475 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 14:22:08.661394 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 14:22:08.661394 ignition[898]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 14:22:08.669366 ignition[898]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:22:08.669366 ignition[898]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:22:08.669366 ignition[898]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 14:22:08.669366 ignition[898]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:22:08.669366 ignition[898]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:22:08.669366 ignition[898]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:22:08.669366 ignition[898]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:22:08.669366 ignition[898]: INFO : files: files passed Jan 30 14:22:08.669366 ignition[898]: INFO : Ignition finished successfully Jan 30 14:22:08.666046 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 14:22:08.675911 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 14:22:08.679070 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 14:22:08.690694 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:22:08.699900 initrd-setup-root-after-ignition[926]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:22:08.699900 initrd-setup-root-after-ignition[926]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:22:08.690778 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
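The files stage above brackets every operation with paired [started]/[finished] lines, so op-level timing is easy to recover from a dump like this one; the remote GETs (op(3), op(4), op(b)) clearly dominate the stage. A small log-analysis sketch, assuming journalctl's default short timestamp format as seen above:

```python
# Pair Ignition "[started]"/"[finished]" lines and report per-op duration.
# The regex follows the line format visible above; this is a log-analysis
# sketch, not part of Ignition itself.
import re
from datetime import datetime

OP_LINE = re.compile(
    r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) ignition\[\d+\]: INFO : "
    r".*?(?P<op>op\([0-9a-f]+\)): \[(?P<phase>started|finished)\]"
)

def op_durations(lines):
    started = {}
    for line in lines:
        m = OP_LINE.search(line)
        if not m:
            continue
        ts = datetime.strptime(m["ts"], "%b %d %H:%M:%S.%f")
        if m["phase"] == "started":
            started[m["op"]] = ts
        elif m["op"] in started:
            yield m["op"], (ts - started.pop(m["op"])).total_seconds()

# Example from the log: op(4) starts at 14:21:57.438535 and finishes at
# 14:22:03.558261, so the cilium-cli download alone accounts for ~6.1 s.
```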
Jan 30 14:22:08.706496 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:22:08.706766 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:22:08.709566 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:22:08.717764 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 14:22:08.757390 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:22:08.757493 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:22:08.758282 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:22:08.759946 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:22:08.762156 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:22:08.770743 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:22:08.781065 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:22:08.786792 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:22:08.796196 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:22:08.796871 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:22:08.798074 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:22:08.799180 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:22:08.799294 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:22:08.800482 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:22:08.801212 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:22:08.802347 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:22:08.803375 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:22:08.804391 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:22:08.805544 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:22:08.806741 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:22:08.807921 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:22:08.808978 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:22:08.810102 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:22:08.811133 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:22:08.811241 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:22:08.812418 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:22:08.813132 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:22:08.814154 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:22:08.814252 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:22:08.815270 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:22:08.815374 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jan 30 14:22:08.816823 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:22:08.816939 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:22:08.817582 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:22:08.817706 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:22:08.829086 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:22:08.829658 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:22:08.829819 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:22:08.832854 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 14:22:08.833393 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:22:08.833565 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:22:08.834316 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:22:08.834464 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:22:08.843906 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 14:22:08.844001 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 14:22:08.852396 ignition[950]: INFO : Ignition 2.20.0 Jan 30 14:22:08.852396 ignition[950]: INFO : Stage: umount Jan 30 14:22:08.853712 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:22:08.853712 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 14:22:08.856784 ignition[950]: INFO : umount: umount passed Jan 30 14:22:08.856784 ignition[950]: INFO : Ignition finished successfully Jan 30 14:22:08.858736 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:22:08.858834 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 14:22:08.862127 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:22:08.863073 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:22:08.863112 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:22:08.864725 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 14:22:08.864767 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:22:08.865249 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 14:22:08.865297 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 14:22:08.865792 systemd[1]: Stopped target network.target - Network. Jan 30 14:22:08.866220 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:22:08.866260 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:22:08.868863 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:22:08.869365 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 14:22:08.869437 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:22:08.870467 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 14:22:08.871579 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 14:22:08.872826 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 14:22:08.872874 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:22:08.874367 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jan 30 14:22:08.874400 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:22:08.875505 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 14:22:08.875547 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 14:22:08.876739 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:22:08.876777 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:22:08.877836 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:22:08.878840 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:22:08.881794 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:22:08.881888 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:22:08.883762 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 14:22:08.883832 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:22:08.883861 systemd-networkd[707]: eth0: DHCPv6 lease lost Jan 30 14:22:08.886363 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:22:08.886448 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:22:08.888315 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:22:08.888362 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:22:08.895783 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:22:08.899131 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:22:08.899184 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:22:08.900327 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:22:08.900368 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:22:08.901298 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:22:08.901337 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:22:08.902551 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:22:08.909917 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:22:08.910043 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:22:08.912891 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 14:22:08.912978 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:22:08.915161 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:22:08.915213 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:22:08.916386 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:22:08.916415 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:22:08.917475 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:22:08.917514 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:22:08.919143 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:22:08.919182 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 14:22:08.920143 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:22:08.920181 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 14:22:08.928764 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:22:08.930697 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:22:08.930747 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:22:08.932719 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:22:08.932762 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:22:08.934626 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:22:08.934768 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:22:08.988600 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:22:08.988877 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:22:08.992345 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:22:08.994195 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:22:08.994311 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:22:09.003946 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:22:09.024898 systemd[1]: Switching root. Jan 30 14:22:09.069545 systemd-journald[184]: Journal stopped Jan 30 14:22:10.624494 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 30 14:22:10.624551 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 14:22:10.624568 kernel: SELinux: policy capability open_perms=1 Jan 30 14:22:10.624582 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 14:22:10.624594 kernel: SELinux: policy capability always_check_network=0 Jan 30 14:22:10.624606 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 14:22:10.624617 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 14:22:10.624629 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 14:22:10.624658 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 14:22:10.624671 kernel: audit: type=1403 audit(1738246929.634:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 14:22:10.624687 systemd[1]: Successfully loaded SELinux policy in 76.407ms. Jan 30 14:22:10.624708 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.508ms. Jan 30 14:22:10.624722 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:22:10.624735 systemd[1]: Detected virtualization kvm. Jan 30 14:22:10.624748 systemd[1]: Detected architecture x86-64. Jan 30 14:22:10.624760 systemd[1]: Detected first boot. Jan 30 14:22:10.624773 systemd[1]: Hostname set to . Jan 30 14:22:10.624786 systemd[1]: Initializing machine ID from VM UUID. Jan 30 14:22:10.624799 zram_generator::config[992]: No configuration found. Jan 30 14:22:10.624814 systemd[1]: Populated /etc with preset unit settings. Jan 30 14:22:10.624829 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 14:22:10.624842 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 14:22:10.624855 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
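The systemd 255 banner above packs the build's compile-time options into +NAME/-NAME tokens (the trailing "default-hierarchy=unified" is a separate key=value). Parsing it into a feature map is a one-liner; the string below is copied verbatim from the log:

```python
# Turn systemd's version-banner feature tokens into a {feature: enabled}
# map. FLAGS is copied from the log line above.
FLAGS = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
         "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
         "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
         "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
         "-XKBCOMMON +UTMP -SYSVINIT")

features = {tok[1:]: tok.startswith("+") for tok in FLAGS.split()}
assert features["SELINUX"] and not features["APPARMOR"]  # matches the log
```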
Jan 30 14:22:10.624868 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 14:22:10.624881 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 14:22:10.624893 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 14:22:10.624905 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 14:22:10.624918 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 14:22:10.624933 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 14:22:10.624946 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 14:22:10.624959 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 14:22:10.624974 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:22:10.624987 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:22:10.625000 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 14:22:10.625012 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 14:22:10.625025 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 14:22:10.625040 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:22:10.625053 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 14:22:10.625065 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:22:10.625077 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 14:22:10.625090 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 14:22:10.625107 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 14:22:10.625120 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 14:22:10.625135 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:22:10.625147 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:22:10.625160 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:22:10.625172 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:22:10.625185 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 14:22:10.625197 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 14:22:10.625210 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:22:10.625222 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:22:10.625234 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:22:10.625249 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 14:22:10.625273 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 14:22:10.625286 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 14:22:10.625298 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 14:22:10.625311 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 14:22:10.625323 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 14:22:10.625336 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 14:22:10.625348 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 14:22:10.625361 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 14:22:10.625377 systemd[1]: Reached target machines.target - Containers. Jan 30 14:22:10.625391 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 14:22:10.625404 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:22:10.625416 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:22:10.625429 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 14:22:10.625441 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:22:10.625454 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:22:10.625466 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:22:10.625483 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 14:22:10.625495 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:22:10.625508 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 14:22:10.625520 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 14:22:10.625533 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 14:22:10.625545 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 14:22:10.625558 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 14:22:10.625570 kernel: fuse: init (API version 7.39) Jan 30 14:22:10.625582 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:22:10.625596 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:22:10.625609 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 14:22:10.625621 kernel: loop: module loaded Jan 30 14:22:10.625633 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 14:22:10.627690 systemd-journald[1081]: Collecting audit messages is disabled. Jan 30 14:22:10.627722 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:22:10.627736 systemd-journald[1081]: Journal started Jan 30 14:22:10.627765 systemd-journald[1081]: Runtime Journal (/run/log/journal/716846461e4d4c17ad2e046b26f6e809) is 8.0M, max 78.3M, 70.3M free. Jan 30 14:22:10.320285 systemd[1]: Queued start job for default target multi-user.target. Jan 30 14:22:10.341875 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 14:22:10.342353 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 14:22:10.632672 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 14:22:10.632706 systemd[1]: Stopped verity-setup.service. 
Jan 30 14:22:10.637657 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:22:10.648679 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:22:10.649318 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 14:22:10.650768 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 14:22:10.651731 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 14:22:10.652734 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 14:22:10.653333 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 14:22:10.655388 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 14:22:10.656082 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:22:10.657417 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 14:22:10.657569 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 14:22:10.658866 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:22:10.658976 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:22:10.662313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:22:10.662432 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:22:10.663155 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 14:22:10.663262 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 14:22:10.663946 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:22:10.664054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:22:10.665594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:22:10.666336 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 14:22:10.678228 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 14:22:10.684730 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 14:22:10.689685 kernel: ACPI: bus type drm_connector registered Jan 30 14:22:10.696039 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 14:22:10.700701 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 14:22:10.701406 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 14:22:10.701503 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:22:10.703889 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 14:22:10.711605 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 14:22:10.716780 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 14:22:10.717904 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:22:10.723821 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 14:22:10.727798 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 30 14:22:10.728363 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:22:10.730888 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 14:22:10.731466 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:22:10.733068 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:22:10.736802 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 14:22:10.740128 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 14:22:10.741096 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:22:10.741304 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:22:10.742036 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 14:22:10.743949 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 14:22:10.744759 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 14:22:10.763412 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 14:22:10.785210 systemd-journald[1081]: Time spent on flushing to /var/log/journal/716846461e4d4c17ad2e046b26f6e809 is 32.137ms for 947 entries. Jan 30 14:22:10.785210 systemd-journald[1081]: System Journal (/var/log/journal/716846461e4d4c17ad2e046b26f6e809) is 8.0M, max 584.8M, 576.8M free. Jan 30 14:22:10.860072 systemd-journald[1081]: Received client request to flush runtime journal. Jan 30 14:22:10.860112 kernel: loop0: detected capacity change from 0 to 138184 Jan 30 14:22:10.796051 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:22:10.799304 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:22:10.807782 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 14:22:10.812000 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 14:22:10.812612 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 14:22:10.820844 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 14:22:10.825336 udevadm[1136]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 14:22:10.863492 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 14:22:10.903230 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 14:22:10.907805 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 14:22:10.924270 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 14:22:10.938740 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 14:22:10.947661 kernel: loop1: detected capacity change from 0 to 8 Jan 30 14:22:10.950602 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:22:10.975701 kernel: loop2: detected capacity change from 0 to 210664 Jan 30 14:22:10.983401 systemd-tmpfiles[1145]: ACLs are not supported, ignoring. Jan 30 14:22:10.983421 systemd-tmpfiles[1145]: ACLs are not supported, ignoring. 
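Quick arithmetic on the journald flush report above (32.137 ms for 947 entries) puts the per-entry persistence cost at roughly 34 µs:

```python
# Arithmetic on the journald flush line above: 32.137 ms spent persisting
# 947 runtime-journal entries to /var/log/journal.
flush_ms, entries = 32.137, 947
print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~33.9 us
```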
Jan 30 14:22:10.989307 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:22:11.035677 kernel: loop3: detected capacity change from 0 to 141000 Jan 30 14:22:11.132815 kernel: loop4: detected capacity change from 0 to 138184 Jan 30 14:22:11.181108 kernel: loop5: detected capacity change from 0 to 8 Jan 30 14:22:11.181181 kernel: loop6: detected capacity change from 0 to 210664 Jan 30 14:22:11.242021 kernel: loop7: detected capacity change from 0 to 141000 Jan 30 14:22:11.291473 (sd-merge)[1151]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 30 14:22:11.292603 (sd-merge)[1151]: Merged extensions into '/usr'. Jan 30 14:22:11.298830 systemd[1]: Reloading requested from client PID 1123 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 14:22:11.298924 systemd[1]: Reloading... Jan 30 14:22:11.412563 zram_generator::config[1173]: No configuration found. Jan 30 14:22:11.600083 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:22:11.657889 systemd[1]: Reloading finished in 357 ms. Jan 30 14:22:11.696855 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 14:22:11.698560 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 14:22:11.708413 ldconfig[1118]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 14:22:11.713971 systemd[1]: Starting ensure-sysext.service... Jan 30 14:22:11.719951 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:22:11.726845 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:22:11.730733 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 14:22:11.741839 systemd[1]: Reloading requested from client PID 1233 ('systemctl') (unit ensure-sysext.service)... Jan 30 14:22:11.741853 systemd[1]: Reloading... Jan 30 14:22:11.767323 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:22:11.768016 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:22:11.771600 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:22:11.772588 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jan 30 14:22:11.773005 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jan 30 14:22:11.781352 systemd-tmpfiles[1234]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:22:11.781535 systemd-tmpfiles[1234]: Skipping /boot Jan 30 14:22:11.782223 systemd-udevd[1235]: Using default interface naming scheme 'v255'. Jan 30 14:22:11.801440 systemd-tmpfiles[1234]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:22:11.801452 systemd-tmpfiles[1234]: Skipping /boot Jan 30 14:22:11.833304 zram_generator::config[1264]: No configuration found. 
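The (sd-merge) lines above pick up the 'kubernetes' extension because the earlier files stage (op(a)) linked /etc/extensions/kubernetes.raw to the downloaded sysext image. A sketch of that wiring done by hand, using the paths from the log; it assumes root privileges and an already-downloaded .raw image, and is illustrative rather than part of the boot flow:

```python
# Recreate the symlink Ignition's op(a) wrote, which systemd-sysext then
# resolves during "Merged extensions into '/usr'". Illustrative only.
import os

TARGET = "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
LINK = "/etc/extensions/kubernetes.raw"

os.makedirs(os.path.dirname(LINK), exist_ok=True)
if not os.path.lexists(LINK):
    os.symlink(TARGET, LINK)
# Afterwards `systemd-sysext refresh` (or a reboot) merges it into /usr.
```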
Jan 30 14:22:11.886699 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1270) Jan 30 14:22:12.008691 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 14:22:12.016678 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 14:22:12.048276 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 14:22:12.066003 kernel: ACPI: button: Power Button [PWRF] Jan 30 14:22:12.070892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:22:12.110681 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 14:22:12.135254 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 14:22:12.135321 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 14:22:12.141083 kernel: Console: switching to colour dummy device 80x25 Jan 30 14:22:12.141136 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 14:22:12.141158 kernel: [drm] features: -context_init Jan 30 14:22:12.143128 kernel: [drm] number of scanouts: 1 Jan 30 14:22:12.143173 kernel: [drm] number of cap sets: 0 Jan 30 14:22:12.146672 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 14:22:12.150747 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 14:22:12.158583 kernel: Console: switching to colour frame buffer device 160x50 Jan 30 14:22:12.162664 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 14:22:12.165623 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 14:22:12.165742 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 14:22:12.169136 systemd[1]: Reloading finished in 426 ms. Jan 30 14:22:12.190823 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:22:12.198146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:22:12.242764 systemd[1]: Finished ensure-sysext.service. Jan 30 14:22:12.245742 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:22:12.250778 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 14:22:12.264954 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:22:12.266440 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:22:12.271724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:22:12.276042 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:22:12.284992 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:22:12.291623 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:22:12.294651 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:22:12.296418 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 30 14:22:12.301790 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 14:22:12.303772 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:22:12.314816 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:22:12.317819 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 14:22:12.321469 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 14:22:12.324478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:22:12.325472 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:22:12.326126 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 14:22:12.326453 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:22:12.326564 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:22:12.326846 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:22:12.326955 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:22:12.327206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:22:12.327313 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:22:12.331503 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:22:12.332266 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:22:12.345599 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 14:22:12.348553 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:22:12.351492 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:22:12.357850 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 14:22:12.373556 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 14:22:12.409776 lvm[1378]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:22:12.410711 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 14:22:12.420486 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:22:12.443757 augenrules[1402]: No rules Jan 30 14:22:12.447916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:22:12.448986 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:22:12.449126 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 14:22:12.457285 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 14:22:12.461299 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 14:22:12.465583 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:22:12.475861 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 14:22:12.482693 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 30 14:22:12.495574 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:22:12.516212 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 14:22:12.516974 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:22:12.522594 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 14:22:12.528456 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 14:22:12.565006 systemd-resolved[1367]: Positive Trust Anchors: Jan 30 14:22:12.565382 systemd-resolved[1367]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:22:12.565482 systemd-resolved[1367]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:22:12.572117 systemd-resolved[1367]: Using system hostname 'ci-4186-1-0-5-d272c7c7c0.novalocal'. Jan 30 14:22:12.573378 systemd-networkd[1366]: lo: Link UP Jan 30 14:22:12.573387 systemd-networkd[1366]: lo: Gained carrier Jan 30 14:22:12.573718 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:22:12.574416 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:22:12.574626 systemd-networkd[1366]: Enumeration completed Jan 30 14:22:12.576118 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:22:12.576125 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:22:12.576784 systemd-networkd[1366]: eth0: Link UP Jan 30 14:22:12.576788 systemd-networkd[1366]: eth0: Gained carrier Jan 30 14:22:12.576803 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:22:12.576908 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 14:22:12.577459 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:22:12.577929 systemd[1]: Reached target network.target - Network. Jan 30 14:22:12.578356 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:22:12.580314 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 14:22:12.581483 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:22:12.583444 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 14:22:12.584296 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:22:12.584323 systemd[1]: Reached target paths.target - Path Units. 
Jan 30 14:22:12.585213 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 14:22:12.586629 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 14:22:12.587812 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 14:22:12.588619 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:22:12.591083 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:22:12.593526 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:22:12.598357 systemd-networkd[1366]: eth0: DHCPv4 address 172.24.4.105/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 30 14:22:12.600464 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:22:12.603464 systemd-timesyncd[1368]: Network configuration changed, trying to establish connection. Jan 30 14:22:12.605958 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:22:12.610821 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:22:12.612226 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:22:12.615189 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:22:12.615891 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:22:12.615918 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:22:12.621739 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:22:12.626802 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:22:12.631113 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:22:12.637783 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:22:12.645090 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 14:22:12.646133 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:22:13.691752 systemd-timesyncd[1368]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Jan 30 14:22:13.691806 systemd-timesyncd[1368]: Initial clock synchronization to Thu 2025-01-30 14:22:13.691635 UTC. Jan 30 14:22:13.691853 systemd-resolved[1367]: Clock change detected. Flushing caches. Jan 30 14:22:13.693901 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 14:22:13.699812 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 14:22:13.706264 jq[1428]: false Jan 30 14:22:13.713726 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:22:13.724740 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
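Note the wall-clock step inside the line above: timesyncd's initial synchronization moves the clock from roughly 14:22:12.65 to 14:22:13.69, which is why resolved flushes its caches and why subsequent timestamps jump forward by about a second. A sketch that flags such steps when reading a dump like this one; the one-second threshold is an arbitrary assumption:

```python
# Scan consecutive journal timestamps and flag forward jumps larger than a
# threshold, e.g. the ~1 s step caused by timesyncd's initial sync above.
import re
from datetime import datetime

TS = re.compile(r"\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+")

def clock_jumps(lines, threshold_s: float = 1.0):
    prev = None
    for line in lines:
        for raw in TS.findall(line):
            ts = datetime.strptime(raw, "%b %d %H:%M:%S.%f")
            if prev is not None and (ts - prev).total_seconds() > threshold_s:
                yield prev, ts
            prev = ts
```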
Jan 30 14:22:13.730627 extend-filesystems[1429]: Found loop4 Jan 30 14:22:13.730627 extend-filesystems[1429]: Found loop5 Jan 30 14:22:13.730627 extend-filesystems[1429]: Found loop6 Jan 30 14:22:13.730627 extend-filesystems[1429]: Found loop7 Jan 30 14:22:13.730627 extend-filesystems[1429]: Found vda Jan 30 14:22:13.730627 extend-filesystems[1429]: Found vda1 Jan 30 14:22:13.730627 extend-filesystems[1429]: Found vda2 Jan 30 14:22:13.730627 extend-filesystems[1429]: Found vda3 Jan 30 14:22:13.730627 extend-filesystems[1429]: Found usr Jan 30 14:22:13.730627 extend-filesystems[1429]: Found vda4 Jan 30 14:22:13.730627 extend-filesystems[1429]: Found vda6 Jan 30 14:22:13.730627 extend-filesystems[1429]: Found vda7 Jan 30 14:22:13.730627 extend-filesystems[1429]: Found vda9 Jan 30 14:22:13.730627 extend-filesystems[1429]: Checking size of /dev/vda9 Jan 30 14:22:13.942339 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1280) Jan 30 14:22:13.942375 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 30 14:22:13.942394 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 30 14:22:13.732356 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:22:13.753699 dbus-daemon[1426]: [system] SELinux support is enabled Jan 30 14:22:13.954881 extend-filesystems[1429]: Resized partition /dev/vda9 Jan 30 14:22:13.741319 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 14:22:13.968258 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:22:13.968258 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 14:22:13.968258 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 14:22:13.968258 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 30 14:22:13.743399 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:22:13.974858 update_engine[1444]: I20250130 14:22:13.794682 1444 main.cc:92] Flatcar Update Engine starting Jan 30 14:22:13.974858 update_engine[1444]: I20250130 14:22:13.808671 1444 update_check_scheduler.cc:74] Next update check in 8m0s Jan 30 14:22:13.975613 extend-filesystems[1429]: Resized filesystem in /dev/vda9 Jan 30 14:22:13.750744 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 14:22:13.988652 jq[1448]: true Jan 30 14:22:13.762257 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:22:13.776947 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 14:22:13.989112 tar[1454]: linux-amd64/helm Jan 30 14:22:13.805315 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:22:13.989473 jq[1455]: true Jan 30 14:22:13.805485 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 14:22:13.806835 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:22:13.806976 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 14:22:13.829365 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 14:22:13.830641 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
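The EXT4 lines above record an online resize of /dev/vda9 from 1617920 to 2014203 blocks at a 4k block size; converting the logged figures to bytes:

```python
# Arithmetic on the logged resize of /dev/vda9 (4k blocks).
BLOCK = 4096
old_blocks, new_blocks = 1_617_920, 2_014_203
print(f"grew by {(new_blocks - old_blocks) * BLOCK / 2**20:.0f} MiB")  # ~1548 MiB
print(f"now {new_blocks * BLOCK / 2**30:.2f} GiB")                     # ~7.68 GiB
```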
Jan 30 14:22:13.860206 systemd[1]: Started update-engine.service - Update Engine. Jan 30 14:22:13.864508 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:22:13.864548 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:22:13.866005 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:22:13.867370 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:22:13.867399 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 14:22:13.877765 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:22:13.921052 systemd-logind[1442]: New seat seat0. Jan 30 14:22:13.933805 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 14:22:13.933827 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 14:22:13.937544 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:22:13.937748 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:22:13.945021 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:22:14.030066 bash[1478]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:22:14.030717 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:22:14.047755 systemd[1]: Starting sshkeys.service... Jan 30 14:22:14.074679 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 14:22:14.088980 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 14:22:14.150708 locksmithd[1462]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:22:14.310762 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:22:14.347151 containerd[1456]: time="2025-01-30T14:22:14.347080685Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 14:22:14.380917 containerd[1456]: time="2025-01-30T14:22:14.380797361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:22:14.382604 containerd[1456]: time="2025-01-30T14:22:14.382331769Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:22:14.382604 containerd[1456]: time="2025-01-30T14:22:14.382377304Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:22:14.382604 containerd[1456]: time="2025-01-30T14:22:14.382399436Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:22:14.382751 containerd[1456]: time="2025-01-30T14:22:14.382611113Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 30 14:22:14.382751 containerd[1456]: time="2025-01-30T14:22:14.382633745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:22:14.382751 containerd[1456]: time="2025-01-30T14:22:14.382705840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:22:14.382751 containerd[1456]: time="2025-01-30T14:22:14.382724195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:22:14.382985 containerd[1456]: time="2025-01-30T14:22:14.382947213Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:22:14.382985 containerd[1456]: time="2025-01-30T14:22:14.382976348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 14:22:14.383042 containerd[1456]: time="2025-01-30T14:22:14.382993339Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:22:14.383042 containerd[1456]: time="2025-01-30T14:22:14.383005703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:22:14.383109 containerd[1456]: time="2025-01-30T14:22:14.383092756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:22:14.383353 containerd[1456]: time="2025-01-30T14:22:14.383320673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:22:14.383462 containerd[1456]: time="2025-01-30T14:22:14.383431752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:22:14.383462 containerd[1456]: time="2025-01-30T14:22:14.383454875Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 14:22:14.383601 containerd[1456]: time="2025-01-30T14:22:14.383555644Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 14:22:14.389666 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:22:14.390715 containerd[1456]: time="2025-01-30T14:22:14.390681969Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:22:14.408846 containerd[1456]: time="2025-01-30T14:22:14.408767789Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:22:14.408846 containerd[1456]: time="2025-01-30T14:22:14.408817322Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:22:14.408846 containerd[1456]: time="2025-01-30T14:22:14.408837931Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 30 14:22:14.409096 containerd[1456]: time="2025-01-30T14:22:14.408857438Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:22:14.409096 containerd[1456]: time="2025-01-30T14:22:14.408876543Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 14:22:14.409096 containerd[1456]: time="2025-01-30T14:22:14.409013330Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409288997Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409402099Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409421976Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409438628Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409454127Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409468564Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409482560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409497658Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409513608Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409527524Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409541200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409554305Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409593939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411614 containerd[1456]: time="2025-01-30T14:22:14.409611181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409625167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409641107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409655584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409670613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409683767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409697683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409711780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409727499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409741696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409755682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409769238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409786229Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409807269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409822147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.411939 containerd[1456]: time="2025-01-30T14:22:14.409835221Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:22:14.412260 containerd[1456]: time="2025-01-30T14:22:14.410384471Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:22:14.412260 containerd[1456]: time="2025-01-30T14:22:14.410409478Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:22:14.412260 containerd[1456]: time="2025-01-30T14:22:14.410422152Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:22:14.412260 containerd[1456]: time="2025-01-30T14:22:14.410442110Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:22:14.412260 containerd[1456]: time="2025-01-30T14:22:14.410453471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.412260 containerd[1456]: time="2025-01-30T14:22:14.410467547Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 30 14:22:14.412260 containerd[1456]: time="2025-01-30T14:22:14.410479139Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:22:14.412260 containerd[1456]: time="2025-01-30T14:22:14.410494528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 14:22:14.414179 containerd[1456]: time="2025-01-30T14:22:14.414110199Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:22:14.414307 containerd[1456]: time="2025-01-30T14:22:14.414179719Z" level=info msg="Connect containerd service" Jan 30 14:22:14.414307 containerd[1456]: time="2025-01-30T14:22:14.414208273Z" level=info msg="using legacy CRI server" Jan 30 14:22:14.414307 containerd[1456]: time="2025-01-30T14:22:14.414215877Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:22:14.414869 containerd[1456]: time="2025-01-30T14:22:14.414347013Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:22:14.415111 
containerd[1456]: time="2025-01-30T14:22:14.415046746Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:22:14.415500 containerd[1456]: time="2025-01-30T14:22:14.415212807Z" level=info msg="Start subscribing containerd event" Jan 30 14:22:14.415500 containerd[1456]: time="2025-01-30T14:22:14.415283329Z" level=info msg="Start recovering state" Jan 30 14:22:14.415500 containerd[1456]: time="2025-01-30T14:22:14.415356747Z" level=info msg="Start event monitor" Jan 30 14:22:14.415500 containerd[1456]: time="2025-01-30T14:22:14.415369872Z" level=info msg="Start snapshots syncer" Jan 30 14:22:14.415500 containerd[1456]: time="2025-01-30T14:22:14.415381473Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:22:14.415500 containerd[1456]: time="2025-01-30T14:22:14.415389398Z" level=info msg="Start streaming server" Jan 30 14:22:14.417780 containerd[1456]: time="2025-01-30T14:22:14.417348963Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:22:14.417780 containerd[1456]: time="2025-01-30T14:22:14.417402985Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:22:14.417592 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:22:14.417937 containerd[1456]: time="2025-01-30T14:22:14.417855163Z" level=info msg="containerd successfully booted in 0.072709s" Jan 30 14:22:14.422117 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:22:14.433891 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:22:14.442768 systemd[1]: Started sshd@0-172.24.4.105:22-172.24.4.1:44796.service - OpenSSH per-connection server daemon (172.24.4.1:44796). Jan 30 14:22:14.451455 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:22:14.451651 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:22:14.465859 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:22:14.493433 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:22:14.503967 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:22:14.508982 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 14:22:14.512683 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 14:22:14.707877 systemd-networkd[1366]: eth0: Gained IPv6LL Jan 30 14:22:14.710254 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 14:22:14.717522 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 14:22:14.723609 tar[1454]: linux-amd64/LICENSE Jan 30 14:22:14.723609 tar[1454]: linux-amd64/README.md Jan 30 14:22:14.730876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:22:14.735059 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 14:22:14.752318 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 14:22:14.763405 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 30 14:22:15.362728 sshd[1511]: Accepted publickey for core from 172.24.4.1 port 44796 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:22:15.365290 sshd-session[1511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:22:15.396001 systemd-logind[1442]: New session 1 of user core. Jan 30 14:22:15.399328 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:22:15.411870 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:22:15.460620 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:22:15.472832 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:22:15.488490 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:22:15.615225 systemd[1538]: Queued start job for default target default.target. Jan 30 14:22:15.622492 systemd[1538]: Created slice app.slice - User Application Slice. Jan 30 14:22:15.622521 systemd[1538]: Reached target paths.target - Paths. Jan 30 14:22:15.622537 systemd[1538]: Reached target timers.target - Timers. Jan 30 14:22:15.624140 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:22:15.650148 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:22:15.650272 systemd[1538]: Reached target sockets.target - Sockets. Jan 30 14:22:15.650290 systemd[1538]: Reached target basic.target - Basic System. Jan 30 14:22:15.650336 systemd[1538]: Reached target default.target - Main User Target. Jan 30 14:22:15.650364 systemd[1538]: Startup finished in 148ms. Jan 30 14:22:15.650701 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:22:15.657835 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:22:16.156880 systemd[1]: Started sshd@1-172.24.4.105:22-172.24.4.1:59594.service - OpenSSH per-connection server daemon (172.24.4.1:59594). Jan 30 14:22:16.526852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:22:16.541496 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:22:17.869030 kubelet[1555]: E0130 14:22:17.868986 1555 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:22:17.872779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:22:17.873095 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:22:17.873923 systemd[1]: kubelet.service: Consumed 1.963s CPU time. Jan 30 14:22:17.876337 sshd[1549]: Accepted publickey for core from 172.24.4.1 port 59594 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:22:17.878702 sshd-session[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:22:17.884359 systemd-logind[1442]: New session 2 of user core. Jan 30 14:22:17.891754 systemd[1]: Started session-2.scope - Session 2 of User core. 
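
sshd logs only the SHA256 fingerprint of the accepted public key; the same fingerprint can be computed from a local key file with ssh-keygen to confirm which key matched (the path below is illustrative):

  ssh-keygen -lf ~/.ssh/id_rsa.pub
  # prints "<bits> SHA256:<fingerprint> <comment> (RSA)" for comparison
  # with the "Accepted publickey" line above
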
Jan 30 14:22:18.583501 sshd[1566]: Connection closed by 172.24.4.1 port 59594 Jan 30 14:22:18.584556 sshd-session[1549]: pam_unix(sshd:session): session closed for user core Jan 30 14:22:18.597096 systemd[1]: sshd@1-172.24.4.105:22-172.24.4.1:59594.service: Deactivated successfully. Jan 30 14:22:18.600419 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:22:18.604983 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:22:18.611839 systemd[1]: Started sshd@2-172.24.4.105:22-172.24.4.1:59598.service - OpenSSH per-connection server daemon (172.24.4.1:59598). Jan 30 14:22:18.620347 systemd-logind[1442]: Removed session 2. Jan 30 14:22:19.573082 agetty[1518]: failed to open credentials directory Jan 30 14:22:19.573243 agetty[1519]: failed to open credentials directory Jan 30 14:22:19.592891 login[1518]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 14:22:19.606943 systemd-logind[1442]: New session 3 of user core. Jan 30 14:22:19.612086 login[1519]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 14:22:19.614028 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:22:19.626371 systemd-logind[1442]: New session 4 of user core. Jan 30 14:22:19.633923 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:22:19.818936 sshd[1571]: Accepted publickey for core from 172.24.4.1 port 59598 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:22:19.821820 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:22:19.831217 systemd-logind[1442]: New session 5 of user core. Jan 30 14:22:19.845013 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:22:20.569123 sshd[1599]: Connection closed by 172.24.4.1 port 59598 Jan 30 14:22:20.568221 sshd-session[1571]: pam_unix(sshd:session): session closed for user core Jan 30 14:22:20.573734 systemd[1]: sshd@2-172.24.4.105:22-172.24.4.1:59598.service: Deactivated successfully. Jan 30 14:22:20.577504 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:22:20.580997 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:22:20.583881 systemd-logind[1442]: Removed session 5. 
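
At this point the core user holds both getty logins (tty1, ttyS0) and SSH sessions; the session table that systemd-logind maintains can be inspected with loginctl (a quick check, no options required):

  loginctl list-sessions
  loginctl show-session 3   # session numbers match the "New session N" lines above
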
Jan 30 14:22:20.751290 coreos-metadata[1424]: Jan 30 14:22:20.751 WARN failed to locate config-drive, using the metadata service API instead Jan 30 14:22:20.798764 coreos-metadata[1424]: Jan 30 14:22:20.798 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 30 14:22:20.987667 coreos-metadata[1424]: Jan 30 14:22:20.986 INFO Fetch successful Jan 30 14:22:20.987667 coreos-metadata[1424]: Jan 30 14:22:20.986 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 14:22:21.001740 coreos-metadata[1424]: Jan 30 14:22:21.001 INFO Fetch successful Jan 30 14:22:21.001930 coreos-metadata[1424]: Jan 30 14:22:21.001 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 30 14:22:21.014266 coreos-metadata[1424]: Jan 30 14:22:21.014 INFO Fetch successful Jan 30 14:22:21.014266 coreos-metadata[1424]: Jan 30 14:22:21.014 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 30 14:22:21.028877 coreos-metadata[1424]: Jan 30 14:22:21.028 INFO Fetch successful Jan 30 14:22:21.028988 coreos-metadata[1424]: Jan 30 14:22:21.028 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 30 14:22:21.042361 coreos-metadata[1424]: Jan 30 14:22:21.042 INFO Fetch successful Jan 30 14:22:21.042361 coreos-metadata[1424]: Jan 30 14:22:21.042 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 30 14:22:21.055409 coreos-metadata[1424]: Jan 30 14:22:21.055 INFO Fetch successful Jan 30 14:22:21.100386 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 14:22:21.101983 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 14:22:21.212845 coreos-metadata[1490]: Jan 30 14:22:21.212 WARN failed to locate config-drive, using the metadata service API instead Jan 30 14:22:21.255264 coreos-metadata[1490]: Jan 30 14:22:21.254 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 30 14:22:21.272723 coreos-metadata[1490]: Jan 30 14:22:21.272 INFO Fetch successful Jan 30 14:22:21.272723 coreos-metadata[1490]: Jan 30 14:22:21.272 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 14:22:21.287283 coreos-metadata[1490]: Jan 30 14:22:21.287 INFO Fetch successful Jan 30 14:22:21.293400 unknown[1490]: wrote ssh authorized keys file for user: core Jan 30 14:22:21.327182 update-ssh-keys[1612]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:22:21.328210 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 14:22:21.332361 systemd[1]: Finished sshkeys.service. Jan 30 14:22:21.336414 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:22:21.336988 systemd[1]: Startup finished in 1.230s (kernel) + 23.813s (initrd) + 10.738s (userspace) = 35.782s. Jan 30 14:22:27.988053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:22:27.998949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:22:28.315532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
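
Both metadata agents above fail to find a config-drive and fall back to the HTTP metadata API; the same endpoints can be queried by hand from inside the instance (URLs taken verbatim from the log):

  curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json
  curl -s http://169.254.169.254/latest/meta-data/hostname
  curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
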
Jan 30 14:22:28.334109 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:22:28.432672 kubelet[1624]: E0130 14:22:28.432623 1624 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:22:28.436263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:22:28.436508 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:22:30.590084 systemd[1]: Started sshd@3-172.24.4.105:22-172.24.4.1:38626.service - OpenSSH per-connection server daemon (172.24.4.1:38626). Jan 30 14:22:31.928194 sshd[1633]: Accepted publickey for core from 172.24.4.1 port 38626 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:22:31.930886 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:22:31.941925 systemd-logind[1442]: New session 6 of user core. Jan 30 14:22:31.953878 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 14:22:32.676253 sshd[1635]: Connection closed by 172.24.4.1 port 38626 Jan 30 14:22:32.677546 sshd-session[1633]: pam_unix(sshd:session): session closed for user core Jan 30 14:22:32.688479 systemd[1]: sshd@3-172.24.4.105:22-172.24.4.1:38626.service: Deactivated successfully. Jan 30 14:22:32.692466 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:22:32.694941 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:22:32.703146 systemd[1]: Started sshd@4-172.24.4.105:22-172.24.4.1:38638.service - OpenSSH per-connection server daemon (172.24.4.1:38638). Jan 30 14:22:32.706849 systemd-logind[1442]: Removed session 6. Jan 30 14:22:34.195301 sshd[1640]: Accepted publickey for core from 172.24.4.1 port 38638 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:22:34.197876 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:22:34.209067 systemd-logind[1442]: New session 7 of user core. Jan 30 14:22:34.216875 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:22:34.933721 sshd[1642]: Connection closed by 172.24.4.1 port 38638 Jan 30 14:22:34.934700 sshd-session[1640]: pam_unix(sshd:session): session closed for user core Jan 30 14:22:34.947447 systemd[1]: sshd@4-172.24.4.105:22-172.24.4.1:38638.service: Deactivated successfully. Jan 30 14:22:34.950733 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:22:34.955693 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Jan 30 14:22:34.963168 systemd[1]: Started sshd@5-172.24.4.105:22-172.24.4.1:55084.service - OpenSSH per-connection server daemon (172.24.4.1:55084). Jan 30 14:22:34.966637 systemd-logind[1442]: Removed session 7. Jan 30 14:22:36.776790 sshd[1647]: Accepted publickey for core from 172.24.4.1 port 55084 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:22:36.779343 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:22:36.790004 systemd-logind[1442]: New session 8 of user core. Jan 30 14:22:36.800943 systemd[1]: Started session-8.scope - Session 8 of User core. 
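
The kubelet crash loop here is expected on a node that has not joined a cluster yet: /var/lib/kubelet/config.yaml is normally written by "kubeadm init" or "kubeadm join", and systemd simply keeps restarting the unit until it exists. A minimal hand-written sketch of the file's shape (illustrative values; a kubeadm-generated file carries many more fields):

  cat <<'EOF' >/var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # must match the runtime: containerd was started with SystemdCgroup=true above
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  EOF
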
Jan 30 14:22:37.515787 sshd[1649]: Connection closed by 172.24.4.1 port 55084 Jan 30 14:22:37.516783 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Jan 30 14:22:37.529659 systemd[1]: sshd@5-172.24.4.105:22-172.24.4.1:55084.service: Deactivated successfully. Jan 30 14:22:37.533046 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:22:37.536867 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:22:37.549089 systemd[1]: Started sshd@6-172.24.4.105:22-172.24.4.1:55092.service - OpenSSH per-connection server daemon (172.24.4.1:55092). Jan 30 14:22:37.551667 systemd-logind[1442]: Removed session 8. Jan 30 14:22:38.488055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 14:22:38.498977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:22:38.714360 sshd[1654]: Accepted publickey for core from 172.24.4.1 port 55092 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:22:38.717948 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:22:38.732152 systemd-logind[1442]: New session 9 of user core. Jan 30 14:22:38.741910 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:22:38.842906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:22:38.854136 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:22:38.939136 kubelet[1665]: E0130 14:22:38.939008 1665 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:22:38.943280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:22:38.943655 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:22:39.177291 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:22:39.177979 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:22:39.194525 sudo[1673]: pam_unix(sudo:session): session closed for user root Jan 30 14:22:39.425526 sshd[1659]: Connection closed by 172.24.4.1 port 55092 Jan 30 14:22:39.426023 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Jan 30 14:22:39.437363 systemd[1]: sshd@6-172.24.4.105:22-172.24.4.1:55092.service: Deactivated successfully. Jan 30 14:22:39.440485 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:22:39.443980 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:22:39.456179 systemd[1]: Started sshd@7-172.24.4.105:22-172.24.4.1:55108.service - OpenSSH per-connection server daemon (172.24.4.1:55108). Jan 30 14:22:39.459248 systemd-logind[1442]: Removed session 9. Jan 30 14:22:40.582393 sshd[1678]: Accepted publickey for core from 172.24.4.1 port 55108 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:22:40.585035 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:22:40.596084 systemd-logind[1442]: New session 10 of user core. Jan 30 14:22:40.604878 systemd[1]: Started session-10.scope - Session 10 of User core. 
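
The sudo records above capture the exact privileged commands run during provisioning; after "setenforce 1" the effective SELinux mode can be confirmed with the standard utilities:

  getenforce        # expected to print "Enforcing" after setenforce 1
  sestatus          # fuller report, where the policy tooling is installed
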
Jan 30 14:22:41.062054 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:22:41.062757 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:22:41.069820 sudo[1682]: pam_unix(sudo:session): session closed for user root Jan 30 14:22:41.080932 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 14:22:41.082296 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:22:41.108219 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 14:22:41.175051 augenrules[1704]: No rules Jan 30 14:22:41.176920 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:22:41.177301 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 14:22:41.179769 sudo[1681]: pam_unix(sudo:session): session closed for user root Jan 30 14:22:41.371776 sshd[1680]: Connection closed by 172.24.4.1 port 55108 Jan 30 14:22:41.373433 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Jan 30 14:22:41.386547 systemd[1]: sshd@7-172.24.4.105:22-172.24.4.1:55108.service: Deactivated successfully. Jan 30 14:22:41.389986 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:22:41.393347 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:22:41.399130 systemd[1]: Started sshd@8-172.24.4.105:22-172.24.4.1:55124.service - OpenSSH per-connection server daemon (172.24.4.1:55124). Jan 30 14:22:41.402228 systemd-logind[1442]: Removed session 10. Jan 30 14:22:42.564357 sshd[1712]: Accepted publickey for core from 172.24.4.1 port 55124 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:22:42.567130 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:22:42.576906 systemd-logind[1442]: New session 11 of user core. Jan 30 14:22:42.585878 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:22:43.044418 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:22:43.045123 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:22:43.715904 (dockerd)[1734]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:22:43.716914 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 14:22:44.217081 dockerd[1734]: time="2025-01-30T14:22:44.216707383Z" level=info msg="Starting up" Jan 30 14:22:44.457711 dockerd[1734]: time="2025-01-30T14:22:44.457625394Z" level=info msg="Loading containers: start." Jan 30 14:22:44.629669 kernel: Initializing XFRM netlink socket Jan 30 14:22:44.806931 systemd-networkd[1366]: docker0: Link UP Jan 30 14:22:44.854348 dockerd[1734]: time="2025-01-30T14:22:44.854197776Z" level=info msg="Loading containers: done." 
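
augenrules reports "No rules" because the two default rule files were deleted just before the restart; any *.rules fragments dropped into /etc/audit/rules.d/ are merged and loaded the same way. A sketch with an illustrative watch rule:

  cat <<'EOF' >/etc/audit/rules.d/10-identity.rules
  -w /etc/passwd -p wa -k identity
  EOF
  systemctl restart audit-rules.service   # re-runs the rule load seen above
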
Jan 30 14:22:44.893923 dockerd[1734]: time="2025-01-30T14:22:44.892000124Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:22:44.893923 dockerd[1734]: time="2025-01-30T14:22:44.892211330Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 14:22:44.893923 dockerd[1734]: time="2025-01-30T14:22:44.892427435Z" level=info msg="Daemon has completed initialization" Jan 30 14:22:44.960553 dockerd[1734]: time="2025-01-30T14:22:44.960393606Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:22:44.960792 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 14:22:46.822782 containerd[1456]: time="2025-01-30T14:22:46.822726893Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 14:22:47.570393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4190928711.mount: Deactivated successfully. Jan 30 14:22:48.987267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 14:22:48.993754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:22:49.112708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:22:49.120845 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:22:49.188750 kubelet[1989]: E0130 14:22:49.188447 1989 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:22:49.190285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:22:49.190452 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
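
The overlay2 warning is informational: native diff is disabled when the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR. Whether the running kernel has that option can be checked if it exports its config, and the active storage driver can be read back from the daemon:

  zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT_DIR   # if /proc/config.gz exists
  docker info --format '{{.Driver}}'                    # expected: overlay2
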
Jan 30 14:22:49.688735 containerd[1456]: time="2025-01-30T14:22:49.688686863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:49.689869 containerd[1456]: time="2025-01-30T14:22:49.689843210Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020" Jan 30 14:22:49.691593 containerd[1456]: time="2025-01-30T14:22:49.691537985Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:49.696257 containerd[1456]: time="2025-01-30T14:22:49.696211963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:49.697625 containerd[1456]: time="2025-01-30T14:22:49.697437970Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.874583809s" Jan 30 14:22:49.697625 containerd[1456]: time="2025-01-30T14:22:49.697470041Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 14:22:49.720492 containerd[1456]: time="2025-01-30T14:22:49.720270193Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 14:22:52.036604 containerd[1456]: time="2025-01-30T14:22:52.036346182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:52.037978 containerd[1456]: time="2025-01-30T14:22:52.037940309Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753" Jan 30 14:22:52.038879 containerd[1456]: time="2025-01-30T14:22:52.038857528Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:52.042724 containerd[1456]: time="2025-01-30T14:22:52.042311812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:52.043467 containerd[1456]: time="2025-01-30T14:22:52.043434186Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.323130069s" Jan 30 14:22:52.043512 containerd[1456]: time="2025-01-30T14:22:52.043465404Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 14:22:52.068448 
containerd[1456]: time="2025-01-30T14:22:52.068411284Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 14:22:53.785604 containerd[1456]: time="2025-01-30T14:22:53.785325456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:53.787559 containerd[1456]: time="2025-01-30T14:22:53.787526362Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072" Jan 30 14:22:53.788832 containerd[1456]: time="2025-01-30T14:22:53.788794288Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:53.792398 containerd[1456]: time="2025-01-30T14:22:53.792332789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:53.793688 containerd[1456]: time="2025-01-30T14:22:53.793473798Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.724816562s" Jan 30 14:22:53.793688 containerd[1456]: time="2025-01-30T14:22:53.793502812Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 14:22:53.817490 containerd[1456]: time="2025-01-30T14:22:53.817457237Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 14:22:55.174870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703020253.mount: Deactivated successfully. 
Jan 30 14:22:55.689769 containerd[1456]: time="2025-01-30T14:22:55.689729101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:55.690871 containerd[1456]: time="2025-01-30T14:22:55.690843229Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 30 14:22:55.692145 containerd[1456]: time="2025-01-30T14:22:55.692120493Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:55.694499 containerd[1456]: time="2025-01-30T14:22:55.694479154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:55.695174 containerd[1456]: time="2025-01-30T14:22:55.695134362Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.877475898s" Jan 30 14:22:55.695223 containerd[1456]: time="2025-01-30T14:22:55.695173245Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 14:22:55.717010 containerd[1456]: time="2025-01-30T14:22:55.716963667Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 14:22:56.299376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2243824339.mount: Deactivated successfully. 
Jan 30 14:22:58.001857 containerd[1456]: time="2025-01-30T14:22:58.001649726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:58.004822 containerd[1456]: time="2025-01-30T14:22:58.004738626Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 30 14:22:58.006316 containerd[1456]: time="2025-01-30T14:22:58.006184536Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:58.023620 containerd[1456]: time="2025-01-30T14:22:58.023508381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:58.029656 containerd[1456]: time="2025-01-30T14:22:58.027656487Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.310637345s" Jan 30 14:22:58.029656 containerd[1456]: time="2025-01-30T14:22:58.027740264Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 14:22:58.081631 containerd[1456]: time="2025-01-30T14:22:58.081013457Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 14:22:58.629123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3463518008.mount: Deactivated successfully. 
Jan 30 14:22:58.640695 containerd[1456]: time="2025-01-30T14:22:58.640425080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:58.642526 containerd[1456]: time="2025-01-30T14:22:58.642422354Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 30 14:22:58.644023 containerd[1456]: time="2025-01-30T14:22:58.643901997Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:58.651130 containerd[1456]: time="2025-01-30T14:22:58.650998170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:22:58.653702 containerd[1456]: time="2025-01-30T14:22:58.653357913Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 572.224561ms" Jan 30 14:22:58.653702 containerd[1456]: time="2025-01-30T14:22:58.653443172Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 14:22:58.702425 containerd[1456]: time="2025-01-30T14:22:58.702337827Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 14:22:59.238221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 14:22:59.251146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:22:59.354628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929935981.mount: Deactivated successfully. Jan 30 14:22:59.432802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:22:59.452389 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:22:59.524069 kubelet[2099]: E0130 14:22:59.523853 2099 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:22:59.526072 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:22:59.526217 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:22:59.570662 update_engine[1444]: I20250130 14:22:59.570314 1444 update_attempter.cc:509] Updating boot flags... 
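
"Updating boot flags..." is update-engine marking the freshly booted partition; on Flatcar the updater can be interrogated with its client tool (a quick status check, assuming the stock tooling is present):

  update_engine_client -status
  # reports the current operation, e.g. UPDATE_STATUS_IDLE as logged by locksmithd earlier
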
Jan 30 14:22:59.761782 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2115) Jan 30 14:22:59.828965 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2113) Jan 30 14:22:59.885715 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2113) Jan 30 14:23:02.704027 containerd[1456]: time="2025-01-30T14:23:02.703900388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:02.709394 containerd[1456]: time="2025-01-30T14:23:02.709268784Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 30 14:23:02.719612 containerd[1456]: time="2025-01-30T14:23:02.714876055Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:02.728097 containerd[1456]: time="2025-01-30T14:23:02.727969914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:02.730426 containerd[1456]: time="2025-01-30T14:23:02.730139691Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.027728957s" Jan 30 14:23:02.730426 containerd[1456]: time="2025-01-30T14:23:02.730212648Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 14:23:07.054651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:23:07.061966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:23:07.096546 systemd[1]: Reloading requested from client PID 2228 ('systemctl') (unit session-11.scope)... Jan 30 14:23:07.096668 systemd[1]: Reloading... Jan 30 14:23:07.216871 zram_generator::config[2267]: No configuration found. Jan 30 14:23:07.359223 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:23:07.444117 systemd[1]: Reloading finished in 346 ms. Jan 30 14:23:07.494834 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:23:07.494900 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:23:07.495214 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:23:07.498022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:23:07.949953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:23:07.953361 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:23:08.016896 kubelet[2335]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:23:08.016896 kubelet[2335]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:23:08.016896 kubelet[2335]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:23:08.017302 kubelet[2335]: I0130 14:23:08.017015 2335 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:23:08.427207 kubelet[2335]: I0130 14:23:08.427063 2335 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:23:08.427207 kubelet[2335]: I0130 14:23:08.427090 2335 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:23:08.427612 kubelet[2335]: I0130 14:23:08.427296 2335 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:23:08.919516 kubelet[2335]: I0130 14:23:08.919375 2335 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:23:08.956871 kubelet[2335]: E0130 14:23:08.956765 2335 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.105:6443: connect: connection refused Jan 30 14:23:09.149678 kubelet[2335]: I0130 14:23:09.149317 2335 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:23:09.151170 kubelet[2335]: I0130 14:23:09.149850 2335 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:23:09.151170 kubelet[2335]: I0130 14:23:09.149938 2335 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-5-d272c7c7c0.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:23:09.151170 kubelet[2335]: I0130 14:23:09.150675 2335 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:23:09.151170 kubelet[2335]: I0130 14:23:09.150704 2335 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:23:09.207016 kubelet[2335]: I0130 14:23:09.206860 2335 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:23:09.218749 kubelet[2335]: I0130 14:23:09.218714 2335 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:23:09.219334 kubelet[2335]: I0130 14:23:09.218899 2335 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:23:09.219334 kubelet[2335]: I0130 14:23:09.218961 2335 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:23:09.219334 kubelet[2335]: I0130 14:23:09.218989 2335 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:23:09.250412 kubelet[2335]: W0130 14:23:09.249740 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-5-d272c7c7c0.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused Jan 30 14:23:09.250412 kubelet[2335]: E0130 14:23:09.249894 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-5-d272c7c7c0.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused Jan 30 14:23:09.260895 kubelet[2335]: W0130 14:23:09.260735 2335 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused Jan 30 14:23:09.260895 kubelet[2335]: E0130 14:23:09.260840 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused Jan 30 14:23:09.261826 kubelet[2335]: I0130 14:23:09.261695 2335 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 14:23:09.265698 kubelet[2335]: I0130 14:23:09.265639 2335 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:23:09.265888 kubelet[2335]: W0130 14:23:09.265767 2335 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 14:23:09.267287 kubelet[2335]: I0130 14:23:09.267035 2335 server.go:1264] "Started kubelet" Jan 30 14:23:09.269635 kubelet[2335]: I0130 14:23:09.269539 2335 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:23:09.280097 kubelet[2335]: E0130 14:23:09.276967 2335 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.105:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.105:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-5-d272c7c7c0.novalocal.181f7e6d57269048 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-5-d272c7c7c0.novalocal,UID:ci-4186-1-0-5-d272c7c7c0.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-5-d272c7c7c0.novalocal,},FirstTimestamp:2025-01-30 14:23:09.266980936 +0000 UTC m=+1.310778989,LastTimestamp:2025-01-30 14:23:09.266980936 +0000 UTC m=+1.310778989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-5-d272c7c7c0.novalocal,}" Jan 30 14:23:09.280097 kubelet[2335]: I0130 14:23:09.277247 2335 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:23:09.280097 kubelet[2335]: I0130 14:23:09.279866 2335 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:23:09.281617 kubelet[2335]: I0130 14:23:09.281540 2335 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:23:09.283681 kubelet[2335]: I0130 14:23:09.283549 2335 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:23:09.284180 kubelet[2335]: I0130 14:23:09.284147 2335 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:23:09.285507 kubelet[2335]: I0130 14:23:09.285460 2335 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:23:09.285702 kubelet[2335]: I0130 14:23:09.285670 2335 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:23:09.292400 kubelet[2335]: E0130 14:23:09.292333 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.24.4.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-5-d272c7c7c0.novalocal?timeout=10s\": dial tcp 172.24.4.105:6443: connect: connection refused" interval="200ms" Jan 30 14:23:09.294139 kubelet[2335]: I0130 14:23:09.294091 2335 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:23:09.299196 kubelet[2335]: W0130 14:23:09.298225 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused Jan 30 14:23:09.299553 kubelet[2335]: E0130 14:23:09.299485 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused Jan 30 14:23:09.300209 kubelet[2335]: I0130 14:23:09.300174 2335 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:23:09.300362 kubelet[2335]: I0130 14:23:09.300342 2335 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:23:09.301086 kubelet[2335]: E0130 14:23:09.301047 2335 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:23:09.321527 kubelet[2335]: I0130 14:23:09.321420 2335 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:23:09.325284 kubelet[2335]: I0130 14:23:09.324607 2335 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:23:09.325284 kubelet[2335]: I0130 14:23:09.324666 2335 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:23:09.325284 kubelet[2335]: I0130 14:23:09.324700 2335 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:23:09.325284 kubelet[2335]: E0130 14:23:09.324784 2335 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:23:09.329207 kubelet[2335]: W0130 14:23:09.329098 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused Jan 30 14:23:09.329524 kubelet[2335]: E0130 14:23:09.329446 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused Jan 30 14:23:09.335100 kubelet[2335]: I0130 14:23:09.335081 2335 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:23:09.335303 kubelet[2335]: I0130 14:23:09.335230 2335 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:23:09.335386 kubelet[2335]: I0130 14:23:09.335377 2335 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:23:09.341946 kubelet[2335]: I0130 14:23:09.341932 2335 policy_none.go:49] "None policy: Start" Jan 30 14:23:09.342678 kubelet[2335]: I0130 14:23:09.342612 2335 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:23:09.342678 kubelet[2335]: I0130 14:23:09.342637 2335 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:23:09.351251 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 14:23:09.366106 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 14:23:09.369088 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 14:23:09.378327 kubelet[2335]: I0130 14:23:09.378291 2335 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 14:23:09.378764 kubelet[2335]: I0130 14:23:09.378443 2335 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 14:23:09.378764 kubelet[2335]: I0130 14:23:09.378535 2335 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 14:23:09.382246 kubelet[2335]: E0130 14:23:09.382217 2335 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" not found"
Jan 30 14:23:09.384030 kubelet[2335]: I0130 14:23:09.384008 2335 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.384637 kubelet[2335]: E0130 14:23:09.384467 2335 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.105:6443/api/v1/nodes\": dial tcp 172.24.4.105:6443: connect: connection refused" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.425259 kubelet[2335]: I0130 14:23:09.425119 2335 topology_manager.go:215] "Topology Admit Handler" podUID="f1c2692d0e7478938a09f92c556b1d23" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.427757 kubelet[2335]: I0130 14:23:09.427695 2335 topology_manager.go:215] "Topology Admit Handler" podUID="959a1b41a9a24cbeac9a2830a213edde" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.430075 kubelet[2335]: I0130 14:23:09.429725 2335 topology_manager.go:215] "Topology Admit Handler" podUID="8038f94267c8076e9a078f36fb24846e" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.442022 systemd[1]: Created slice kubepods-burstable-podf1c2692d0e7478938a09f92c556b1d23.slice - libcontainer container kubepods-burstable-podf1c2692d0e7478938a09f92c556b1d23.slice.
Jan 30 14:23:09.463935 systemd[1]: Created slice kubepods-burstable-pod959a1b41a9a24cbeac9a2830a213edde.slice - libcontainer container kubepods-burstable-pod959a1b41a9a24cbeac9a2830a213edde.slice.
Jan 30 14:23:09.484888 systemd[1]: Created slice kubepods-burstable-pod8038f94267c8076e9a078f36fb24846e.slice - libcontainer container kubepods-burstable-pod8038f94267c8076e9a078f36fb24846e.slice.
Jan 30 14:23:09.487132 kubelet[2335]: I0130 14:23:09.486788 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1c2692d0e7478938a09f92c556b1d23-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"f1c2692d0e7478938a09f92c556b1d23\") " pod="kube-system/kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.487132 kubelet[2335]: I0130 14:23:09.486841 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1c2692d0e7478938a09f92c556b1d23-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"f1c2692d0e7478938a09f92c556b1d23\") " pod="kube-system/kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.488066 kubelet[2335]: I0130 14:23:09.487428 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1c2692d0e7478938a09f92c556b1d23-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"f1c2692d0e7478938a09f92c556b1d23\") " pod="kube-system/kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.488066 kubelet[2335]: I0130 14:23:09.487614 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/959a1b41a9a24cbeac9a2830a213edde-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"959a1b41a9a24cbeac9a2830a213edde\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.488066 kubelet[2335]: I0130 14:23:09.487671 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8038f94267c8076e9a078f36fb24846e-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"8038f94267c8076e9a078f36fb24846e\") " pod="kube-system/kube-scheduler-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.488066 kubelet[2335]: I0130 14:23:09.487714 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/959a1b41a9a24cbeac9a2830a213edde-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"959a1b41a9a24cbeac9a2830a213edde\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.488266 kubelet[2335]: I0130 14:23:09.487760 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/959a1b41a9a24cbeac9a2830a213edde-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"959a1b41a9a24cbeac9a2830a213edde\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.488266 kubelet[2335]: I0130 14:23:09.487812 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/959a1b41a9a24cbeac9a2830a213edde-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"959a1b41a9a24cbeac9a2830a213edde\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.488266 kubelet[2335]: I0130 14:23:09.487856 2335 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/959a1b41a9a24cbeac9a2830a213edde-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"959a1b41a9a24cbeac9a2830a213edde\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.493494 kubelet[2335]: E0130 14:23:09.493427 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-5-d272c7c7c0.novalocal?timeout=10s\": dial tcp 172.24.4.105:6443: connect: connection refused" interval="400ms"
Jan 30 14:23:09.588317 kubelet[2335]: I0130 14:23:09.588241 2335 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.589207 kubelet[2335]: E0130 14:23:09.588804 2335 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.105:6443/api/v1/nodes\": dial tcp 172.24.4.105:6443: connect: connection refused" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.762795 containerd[1456]: time="2025-01-30T14:23:09.762660822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal,Uid:f1c2692d0e7478938a09f92c556b1d23,Namespace:kube-system,Attempt:0,}"
Jan 30 14:23:09.782769 containerd[1456]: time="2025-01-30T14:23:09.782622361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal,Uid:959a1b41a9a24cbeac9a2830a213edde,Namespace:kube-system,Attempt:0,}"
Jan 30 14:23:09.791709 containerd[1456]: time="2025-01-30T14:23:09.791522659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-5-d272c7c7c0.novalocal,Uid:8038f94267c8076e9a078f36fb24846e,Namespace:kube-system,Attempt:0,}"
Jan 30 14:23:09.895011 kubelet[2335]: E0130 14:23:09.894892 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-5-d272c7c7c0.novalocal?timeout=10s\": dial tcp 172.24.4.105:6443: connect: connection refused" interval="800ms"
Jan 30 14:23:09.993100 kubelet[2335]: I0130 14:23:09.993039 2335 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:09.993724 kubelet[2335]: E0130 14:23:09.993662 2335 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.105:6443/api/v1/nodes\": dial tcp 172.24.4.105:6443: connect: connection refused" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:10.333802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103865060.mount: Deactivated successfully.
Jan 30 14:23:10.347628 containerd[1456]: time="2025-01-30T14:23:10.347482982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:23:10.350059 containerd[1456]: time="2025-01-30T14:23:10.349953184Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 30 14:23:10.354335 containerd[1456]: time="2025-01-30T14:23:10.354207130Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:23:10.357195 containerd[1456]: time="2025-01-30T14:23:10.357102118Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:23:10.358552 containerd[1456]: time="2025-01-30T14:23:10.358468560Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 14:23:10.363646 containerd[1456]: time="2025-01-30T14:23:10.362151415Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 14:23:10.363646 containerd[1456]: time="2025-01-30T14:23:10.362361650Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:23:10.371635 containerd[1456]: time="2025-01-30T14:23:10.371530692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:23:10.380517 containerd[1456]: time="2025-01-30T14:23:10.380393641Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 588.438251ms"
Jan 30 14:23:10.384871 containerd[1456]: time="2025-01-30T14:23:10.384801876Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 621.100442ms"
Jan 30 14:23:10.393135 containerd[1456]: time="2025-01-30T14:23:10.392963681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 610.180338ms"
Jan 30 14:23:10.525589 kubelet[2335]: W0130 14:23:10.525493 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused
Jan 30 14:23:10.525589 kubelet[2335]: E0130 14:23:10.525598 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused
Jan 30 14:23:10.554207 kubelet[2335]: W0130 14:23:10.554132 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused
Jan 30 14:23:10.554207 kubelet[2335]: E0130 14:23:10.554210 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused
Jan 30 14:23:10.574326 containerd[1456]: time="2025-01-30T14:23:10.569278211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:23:10.574326 containerd[1456]: time="2025-01-30T14:23:10.574045320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:23:10.574326 containerd[1456]: time="2025-01-30T14:23:10.574078011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:23:10.574326 containerd[1456]: time="2025-01-30T14:23:10.574211120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:23:10.575044 containerd[1456]: time="2025-01-30T14:23:10.574979622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:23:10.575187 containerd[1456]: time="2025-01-30T14:23:10.575148839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:23:10.575311 containerd[1456]: time="2025-01-30T14:23:10.575273413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:23:10.575513 containerd[1456]: time="2025-01-30T14:23:10.575473588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:23:10.575727 containerd[1456]: time="2025-01-30T14:23:10.575336431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:23:10.575727 containerd[1456]: time="2025-01-30T14:23:10.575691537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:23:10.575986 containerd[1456]: time="2025-01-30T14:23:10.575939882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:23:10.576774 containerd[1456]: time="2025-01-30T14:23:10.576739121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:23:10.607777 systemd[1]: Started cri-containerd-2ebe521d6a5c4ff7a6e35853ef6f84edc2590f2671b8cf7f7b1ed522550a427e.scope - libcontainer container 2ebe521d6a5c4ff7a6e35853ef6f84edc2590f2671b8cf7f7b1ed522550a427e.
Jan 30 14:23:10.610181 systemd[1]: Started cri-containerd-4a03b148bdb1b3ed03a12aa25cb87d99032d077dbfec6627d529dcdbbfaf4e72.scope - libcontainer container 4a03b148bdb1b3ed03a12aa25cb87d99032d077dbfec6627d529dcdbbfaf4e72.
Jan 30 14:23:10.617825 systemd[1]: Started cri-containerd-662cd06fdcdbab67015b295c929cc3bd2835985ab454917d58a8f1f058f0ac33.scope - libcontainer container 662cd06fdcdbab67015b295c929cc3bd2835985ab454917d58a8f1f058f0ac33.
Jan 30 14:23:10.677294 containerd[1456]: time="2025-01-30T14:23:10.677228194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-5-d272c7c7c0.novalocal,Uid:8038f94267c8076e9a078f36fb24846e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a03b148bdb1b3ed03a12aa25cb87d99032d077dbfec6627d529dcdbbfaf4e72\""
Jan 30 14:23:10.686362 containerd[1456]: time="2025-01-30T14:23:10.686195198Z" level=info msg="CreateContainer within sandbox \"4a03b148bdb1b3ed03a12aa25cb87d99032d077dbfec6627d529dcdbbfaf4e72\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 14:23:10.691659 containerd[1456]: time="2025-01-30T14:23:10.691627242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal,Uid:f1c2692d0e7478938a09f92c556b1d23,Namespace:kube-system,Attempt:0,} returns sandbox id \"662cd06fdcdbab67015b295c929cc3bd2835985ab454917d58a8f1f058f0ac33\""
Jan 30 14:23:10.694648 containerd[1456]: time="2025-01-30T14:23:10.694621958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal,Uid:959a1b41a9a24cbeac9a2830a213edde,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ebe521d6a5c4ff7a6e35853ef6f84edc2590f2671b8cf7f7b1ed522550a427e\""
Jan 30 14:23:10.696130 kubelet[2335]: E0130 14:23:10.695532 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-5-d272c7c7c0.novalocal?timeout=10s\": dial tcp 172.24.4.105:6443: connect: connection refused" interval="1.6s"
Jan 30 14:23:10.698227 containerd[1456]: time="2025-01-30T14:23:10.698206379Z" level=info msg="CreateContainer within sandbox \"662cd06fdcdbab67015b295c929cc3bd2835985ab454917d58a8f1f058f0ac33\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 14:23:10.698554 containerd[1456]: time="2025-01-30T14:23:10.698333096Z" level=info msg="CreateContainer within sandbox \"2ebe521d6a5c4ff7a6e35853ef6f84edc2590f2671b8cf7f7b1ed522550a427e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 14:23:10.701981 kubelet[2335]: W0130 14:23:10.701936 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused
Jan 30 14:23:10.703497 kubelet[2335]: E0130 14:23:10.701992 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused
Jan 30 14:23:10.706225 kubelet[2335]: W0130 14:23:10.706192 2335 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-5-d272c7c7c0.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused
Jan 30 14:23:10.706384 kubelet[2335]: E0130 14:23:10.706232 2335 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-5-d272c7c7c0.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.105:6443: connect: connection refused
Jan 30 14:23:10.726648 containerd[1456]: time="2025-01-30T14:23:10.726563601Z" level=info msg="CreateContainer within sandbox \"4a03b148bdb1b3ed03a12aa25cb87d99032d077dbfec6627d529dcdbbfaf4e72\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fbcb8e228ac8bc0b01e93fff59161c7348a48ab134cd4037dcd16a67bdfa8aca\""
Jan 30 14:23:10.727174 containerd[1456]: time="2025-01-30T14:23:10.727139090Z" level=info msg="StartContainer for \"fbcb8e228ac8bc0b01e93fff59161c7348a48ab134cd4037dcd16a67bdfa8aca\""
Jan 30 14:23:10.739892 containerd[1456]: time="2025-01-30T14:23:10.739836498Z" level=info msg="CreateContainer within sandbox \"2ebe521d6a5c4ff7a6e35853ef6f84edc2590f2671b8cf7f7b1ed522550a427e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"76953ace260fec16d249fb1da7f4094aa86f2f84fd669e95f84a16459ec2c14a\""
Jan 30 14:23:10.741628 containerd[1456]: time="2025-01-30T14:23:10.740657488Z" level=info msg="StartContainer for \"76953ace260fec16d249fb1da7f4094aa86f2f84fd669e95f84a16459ec2c14a\""
Jan 30 14:23:10.751763 containerd[1456]: time="2025-01-30T14:23:10.751662463Z" level=info msg="CreateContainer within sandbox \"662cd06fdcdbab67015b295c929cc3bd2835985ab454917d58a8f1f058f0ac33\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"615d3af6779989bd2ee476dbb4731782a246ab6663d0603c1ac88767ba44eb40\""
Jan 30 14:23:10.752333 containerd[1456]: time="2025-01-30T14:23:10.752291972Z" level=info msg="StartContainer for \"615d3af6779989bd2ee476dbb4731782a246ab6663d0603c1ac88767ba44eb40\""
Jan 30 14:23:10.761787 systemd[1]: Started cri-containerd-fbcb8e228ac8bc0b01e93fff59161c7348a48ab134cd4037dcd16a67bdfa8aca.scope - libcontainer container fbcb8e228ac8bc0b01e93fff59161c7348a48ab134cd4037dcd16a67bdfa8aca.
Jan 30 14:23:10.788754 systemd[1]: Started cri-containerd-76953ace260fec16d249fb1da7f4094aa86f2f84fd669e95f84a16459ec2c14a.scope - libcontainer container 76953ace260fec16d249fb1da7f4094aa86f2f84fd669e95f84a16459ec2c14a.
Jan 30 14:23:10.793669 systemd[1]: Started cri-containerd-615d3af6779989bd2ee476dbb4731782a246ab6663d0603c1ac88767ba44eb40.scope - libcontainer container 615d3af6779989bd2ee476dbb4731782a246ab6663d0603c1ac88767ba44eb40.
Jan 30 14:23:10.796928 kubelet[2335]: I0130 14:23:10.796873 2335 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:10.797703 kubelet[2335]: E0130 14:23:10.797488 2335 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.105:6443/api/v1/nodes\": dial tcp 172.24.4.105:6443: connect: connection refused" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:10.852588 containerd[1456]: time="2025-01-30T14:23:10.851900846Z" level=info msg="StartContainer for \"76953ace260fec16d249fb1da7f4094aa86f2f84fd669e95f84a16459ec2c14a\" returns successfully"
Jan 30 14:23:10.867605 containerd[1456]: time="2025-01-30T14:23:10.866772220Z" level=info msg="StartContainer for \"fbcb8e228ac8bc0b01e93fff59161c7348a48ab134cd4037dcd16a67bdfa8aca\" returns successfully"
Jan 30 14:23:10.884225 containerd[1456]: time="2025-01-30T14:23:10.884178238Z" level=info msg="StartContainer for \"615d3af6779989bd2ee476dbb4731782a246ab6663d0603c1ac88767ba44eb40\" returns successfully"
Jan 30 14:23:12.400560 kubelet[2335]: I0130 14:23:12.400245 2335 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:13.505471 kubelet[2335]: E0130 14:23:13.505427 2335 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-0-5-d272c7c7c0.novalocal\" not found" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:13.567984 kubelet[2335]: E0130 14:23:13.567747 2335 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186-1-0-5-d272c7c7c0.novalocal.181f7e6d57269048 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-5-d272c7c7c0.novalocal,UID:ci-4186-1-0-5-d272c7c7c0.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-5-d272c7c7c0.novalocal,},FirstTimestamp:2025-01-30 14:23:09.266980936 +0000 UTC m=+1.310778989,LastTimestamp:2025-01-30 14:23:09.266980936 +0000 UTC m=+1.310778989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-5-d272c7c7c0.novalocal,}"
Jan 30 14:23:13.627480 kubelet[2335]: I0130 14:23:13.627384 2335 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:13.633088 kubelet[2335]: E0130 14:23:13.632903 2335 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186-1-0-5-d272c7c7c0.novalocal.181f7e6d592deda1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-5-d272c7c7c0.novalocal,UID:ci-4186-1-0-5-d272c7c7c0.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-5-d272c7c7c0.novalocal,},FirstTimestamp:2025-01-30 14:23:09.301018017 +0000 UTC m=+1.344816099,LastTimestamp:2025-01-30 14:23:09.301018017 +0000 UTC m=+1.344816099,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-5-d272c7c7c0.novalocal,}"
Jan 30 14:23:13.697502 kubelet[2335]: E0130 14:23:13.697282 2335 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186-1-0-5-d272c7c7c0.novalocal.181f7e6d5b2ba1f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-5-d272c7c7c0.novalocal,UID:ci-4186-1-0-5-d272c7c7c0.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4186-1-0-5-d272c7c7c0.novalocal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-5-d272c7c7c0.novalocal,},FirstTimestamp:2025-01-30 14:23:09.334422 +0000 UTC m=+1.378220012,LastTimestamp:2025-01-30 14:23:09.334422 +0000 UTC m=+1.378220012,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-5-d272c7c7c0.novalocal,}"
Jan 30 14:23:14.222857 kubelet[2335]: I0130 14:23:14.222658 2335 apiserver.go:52] "Watching apiserver"
Jan 30 14:23:14.285794 kubelet[2335]: I0130 14:23:14.285702 2335 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 14:23:15.958792 systemd[1]: Reloading requested from client PID 2609 ('systemctl') (unit session-11.scope)...
Jan 30 14:23:15.959425 systemd[1]: Reloading...
Jan 30 14:23:16.075619 zram_generator::config[2648]: No configuration found.
Jan 30 14:23:16.213742 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:23:16.326323 systemd[1]: Reloading finished in 366 ms.
Jan 30 14:23:16.372554 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:23:16.384358 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 14:23:16.384547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:23:16.384619 systemd[1]: kubelet.service: Consumed 1.149s CPU time, 116.7M memory peak, 0B memory swap peak.
Jan 30 14:23:16.394064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:23:16.630351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:23:16.641773 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 14:23:16.748456 kubelet[2712]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:23:16.748456 kubelet[2712]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 14:23:16.748456 kubelet[2712]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:23:16.748456 kubelet[2712]: I0130 14:23:16.748444 2712 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 14:23:16.759822 kubelet[2712]: I0130 14:23:16.759769 2712 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 14:23:16.760039 kubelet[2712]: I0130 14:23:16.760021 2712 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 14:23:16.760509 kubelet[2712]: I0130 14:23:16.760484 2712 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 14:23:16.764510 kubelet[2712]: I0130 14:23:16.764478 2712 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 14:23:16.766838 kubelet[2712]: I0130 14:23:16.766792 2712 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 14:23:16.780518 kubelet[2712]: I0130 14:23:16.780480 2712 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 14:23:16.781099 kubelet[2712]: I0130 14:23:16.781059 2712 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 14:23:16.781858 kubelet[2712]: I0130 14:23:16.781253 2712 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-5-d272c7c7c0.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 14:23:16.782230 kubelet[2712]: I0130 14:23:16.782208 2712 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 14:23:16.782477 kubelet[2712]: I0130 14:23:16.782387 2712 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 14:23:16.782991 kubelet[2712]: I0130 14:23:16.782891 2712 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:23:16.783532 kubelet[2712]: I0130 14:23:16.783449 2712 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 14:23:16.783532 kubelet[2712]: I0130 14:23:16.783478 2712 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 14:23:16.784011 kubelet[2712]: I0130 14:23:16.783636 2712 kubelet.go:312] "Adding apiserver pod source"
Jan 30 14:23:16.784011 kubelet[2712]: I0130 14:23:16.783671 2712 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 14:23:16.786074 kubelet[2712]: I0130 14:23:16.786039 2712 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 14:23:16.786262 kubelet[2712]: I0130 14:23:16.786213 2712 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 14:23:16.787422 kubelet[2712]: I0130 14:23:16.787398 2712 server.go:1264] "Started kubelet"
Jan 30 14:23:16.795948 kubelet[2712]: I0130 14:23:16.795918 2712 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 14:23:16.810980 kubelet[2712]: I0130 14:23:16.810938 2712 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 14:23:16.811762 kubelet[2712]: I0130 14:23:16.811718 2712 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 14:23:16.813587 kubelet[2712]: I0130 14:23:16.811986 2712 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 14:23:16.814407 kubelet[2712]: I0130 14:23:16.814387 2712 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 14:23:16.821856 kubelet[2712]: I0130 14:23:16.821820 2712 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 14:23:16.822567 kubelet[2712]: I0130 14:23:16.822540 2712 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 14:23:16.824761 kubelet[2712]: I0130 14:23:16.824733 2712 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 14:23:16.827364 kubelet[2712]: I0130 14:23:16.827333 2712 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 14:23:16.828362 kubelet[2712]: I0130 14:23:16.828327 2712 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 14:23:16.828362 kubelet[2712]: I0130 14:23:16.828364 2712 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 14:23:16.828448 kubelet[2712]: I0130 14:23:16.828379 2712 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 14:23:16.828448 kubelet[2712]: E0130 14:23:16.828417 2712 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 14:23:16.835384 kubelet[2712]: I0130 14:23:16.835358 2712 factory.go:221] Registration of the systemd container factory successfully
Jan 30 14:23:16.839186 kubelet[2712]: I0130 14:23:16.836660 2712 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 14:23:16.839631 kubelet[2712]: E0130 14:23:16.839611 2712 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 14:23:16.841077 kubelet[2712]: I0130 14:23:16.841049 2712 factory.go:221] Registration of the containerd container factory successfully
Jan 30 14:23:16.883188 kubelet[2712]: I0130 14:23:16.883096 2712 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 14:23:16.883188 kubelet[2712]: I0130 14:23:16.883117 2712 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 14:23:16.883188 kubelet[2712]: I0130 14:23:16.883133 2712 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:23:16.884398 kubelet[2712]: I0130 14:23:16.884349 2712 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 14:23:16.884398 kubelet[2712]: I0130 14:23:16.884387 2712 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 14:23:16.885343 kubelet[2712]: I0130 14:23:16.884409 2712 policy_none.go:49] "None policy: Start"
Jan 30 14:23:16.886551 kubelet[2712]: I0130 14:23:16.886528 2712 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 14:23:16.886551 kubelet[2712]: I0130 14:23:16.886552 2712 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 14:23:16.886780 kubelet[2712]: I0130 14:23:16.886748 2712 state_mem.go:75] "Updated machine memory state"
Jan 30 14:23:16.891028 kubelet[2712]: I0130 14:23:16.891007 2712 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 14:23:16.891900 kubelet[2712]: I0130 14:23:16.891156 2712 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 14:23:16.891900 kubelet[2712]: I0130 14:23:16.891243 2712 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 14:23:16.917806 kubelet[2712]: I0130 14:23:16.917784 2712 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:16.926376 sudo[2744]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 30 14:23:16.926731 sudo[2744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 30 14:23:16.929147 kubelet[2712]: I0130 14:23:16.929104 2712 topology_manager.go:215] "Topology Admit Handler" podUID="f1c2692d0e7478938a09f92c556b1d23" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:16.930537 kubelet[2712]: I0130 14:23:16.929825 2712 topology_manager.go:215] "Topology Admit Handler" podUID="959a1b41a9a24cbeac9a2830a213edde" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:16.930537 kubelet[2712]: I0130 14:23:16.929891 2712 topology_manager.go:215] "Topology Admit Handler" podUID="8038f94267c8076e9a078f36fb24846e" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:16.940740 kubelet[2712]: I0130 14:23:16.940714 2712 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:16.940984 kubelet[2712]: I0130 14:23:16.940940 2712 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:16.950486 kubelet[2712]: W0130 14:23:16.950441 2712 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 14:23:16.951132 kubelet[2712]: W0130 14:23:16.951064 2712 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 14:23:16.951463 kubelet[2712]: W0130 14:23:16.951432 2712 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 14:23:17.026300 kubelet[2712]: I0130 14:23:17.026083 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/959a1b41a9a24cbeac9a2830a213edde-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"959a1b41a9a24cbeac9a2830a213edde\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:17.026300 kubelet[2712]: I0130 14:23:17.026124 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/959a1b41a9a24cbeac9a2830a213edde-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"959a1b41a9a24cbeac9a2830a213edde\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:17.026300 kubelet[2712]: I0130 14:23:17.026149 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/959a1b41a9a24cbeac9a2830a213edde-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"959a1b41a9a24cbeac9a2830a213edde\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:17.026300 kubelet[2712]: I0130 14:23:17.026170 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1c2692d0e7478938a09f92c556b1d23-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"f1c2692d0e7478938a09f92c556b1d23\") " pod="kube-system/kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:17.026732 kubelet[2712]: I0130 14:23:17.026190 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1c2692d0e7478938a09f92c556b1d23-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"f1c2692d0e7478938a09f92c556b1d23\") " pod="kube-system/kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:17.026983 kubelet[2712]: I0130 14:23:17.026541 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/959a1b41a9a24cbeac9a2830a213edde-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"959a1b41a9a24cbeac9a2830a213edde\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:17.026983 kubelet[2712]: I0130 14:23:17.026878 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/959a1b41a9a24cbeac9a2830a213edde-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"959a1b41a9a24cbeac9a2830a213edde\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:17.026983 kubelet[2712]: I0130 14:23:17.026899 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8038f94267c8076e9a078f36fb24846e-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"8038f94267c8076e9a078f36fb24846e\") " pod="kube-system/kube-scheduler-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:17.026983 kubelet[2712]: I0130 14:23:17.026916 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1c2692d0e7478938a09f92c556b1d23-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal\" (UID: \"f1c2692d0e7478938a09f92c556b1d23\") " pod="kube-system/kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:17.469889 sudo[2744]: pam_unix(sudo:session): session closed for user root
Jan 30 14:23:17.785834 kubelet[2712]: I0130 14:23:17.784918 2712 apiserver.go:52] "Watching apiserver"
Jan 30 14:23:17.823609 kubelet[2712]: I0130 14:23:17.823347 2712 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 14:23:17.871589 kubelet[2712]: W0130 14:23:17.869891 2712 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 14:23:17.871589 kubelet[2712]: E0130 14:23:17.870015 2712 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal"
Jan 30 14:23:17.993728 kubelet[2712]: I0130 14:23:17.993655 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-5-d272c7c7c0.novalocal" podStartSLOduration=1.993393888 podStartE2EDuration="1.993393888s" podCreationTimestamp="2025-01-30 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:23:17.957626897 +0000 UTC m=+1.311537545" watchObservedRunningTime="2025-01-30 14:23:17.993393888 +0000 UTC m=+1.347304526"
Jan 30 14:23:18.014253 kubelet[2712]: I0130 14:23:18.014201 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-5-d272c7c7c0.novalocal" podStartSLOduration=2.014182962 podStartE2EDuration="2.014182962s" podCreationTimestamp="2025-01-30 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:23:17.998605551 +0000 UTC m=+1.352516189" watchObservedRunningTime="2025-01-30 14:23:18.014182962 +0000 UTC m=+1.368093600"
Jan 30 14:23:18.026917 kubelet[2712]: I0130 14:23:18.026870 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-5-d272c7c7c0.novalocal" podStartSLOduration=2.026854674 podStartE2EDuration="2.026854674s" podCreationTimestamp="2025-01-30 14:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:23:18.015146809 +0000 UTC m=+1.369057457" watchObservedRunningTime="2025-01-30 14:23:18.026854674 +0000 UTC m=+1.380765312"
Jan 30 14:23:19.901072 sudo[1715]: pam_unix(sudo:session): session closed for user root
Jan 30 14:23:20.183734 sshd[1714]: Connection closed by 172.24.4.1 port 55124
Jan 30 14:23:20.184673 sshd-session[1712]: pam_unix(sshd:session): session closed for user core
Jan 30 14:23:20.193259 systemd[1]: sshd@8-172.24.4.105:22-172.24.4.1:55124.service: Deactivated successfully.
Jan 30 14:23:20.197170 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 14:23:20.198015 systemd[1]: session-11.scope: Consumed 7.902s CPU time, 188.8M memory peak, 0B memory swap peak.
Jan 30 14:23:20.200414 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit.
Jan 30 14:23:20.202868 systemd-logind[1442]: Removed session 11.
Jan 30 14:23:29.925248 kubelet[2712]: I0130 14:23:29.925164 2712 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 14:23:29.925949 kubelet[2712]: I0130 14:23:29.925690 2712 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 14:23:29.925996 containerd[1456]: time="2025-01-30T14:23:29.925489022Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 14:23:30.534672 kubelet[2712]: I0130 14:23:30.533301 2712 topology_manager.go:215] "Topology Admit Handler" podUID="ed8d7370-6550-4947-adff-ed242ed94233" podNamespace="kube-system" podName="kube-proxy-64v8h"
Jan 30 14:23:30.546630 kubelet[2712]: W0130 14:23:30.546103 2712 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186-1-0-5-d272c7c7c0.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-5-d272c7c7c0.novalocal' and this object
Jan 30 14:23:30.546630 kubelet[2712]: E0130 14:23:30.546184 2712 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186-1-0-5-d272c7c7c0.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-5-d272c7c7c0.novalocal' and this object
Jan 30 14:23:30.547104 kubelet[2712]: W0130 14:23:30.547066 2712 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186-1-0-5-d272c7c7c0.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-5-d272c7c7c0.novalocal' and this object
Jan 30 14:23:30.547331 kubelet[2712]: E0130 14:23:30.547296 2712 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186-1-0-5-d272c7c7c0.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-5-d272c7c7c0.novalocal' and this object
Jan 30 14:23:30.555252 systemd[1]: Created slice kubepods-besteffort-poded8d7370_6550_4947_adff_ed242ed94233.slice - libcontainer container kubepods-besteffort-poded8d7370_6550_4947_adff_ed242ed94233.slice.
Jan 30 14:23:30.575475 kubelet[2712]: I0130 14:23:30.573784 2712 topology_manager.go:215] "Topology Admit Handler" podUID="af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" podNamespace="kube-system" podName="cilium-vf4p9"
Jan 30 14:23:30.582649 systemd[1]: Created slice kubepods-burstable-podaf0ddd45_8ee5_4e7d_a546_0b8226ca1f83.slice - libcontainer container kubepods-burstable-podaf0ddd45_8ee5_4e7d_a546_0b8226ca1f83.slice.
Jan 30 14:23:30.626489 kubelet[2712]: I0130 14:23:30.626454 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cilium-cgroup\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.626751 kubelet[2712]: I0130 14:23:30.626732 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed8d7370-6550-4947-adff-ed242ed94233-lib-modules\") pod \"kube-proxy-64v8h\" (UID: \"ed8d7370-6550-4947-adff-ed242ed94233\") " pod="kube-system/kube-proxy-64v8h"
Jan 30 14:23:30.626927 kubelet[2712]: I0130 14:23:30.626885 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d6vr\" (UniqueName: \"kubernetes.io/projected/ed8d7370-6550-4947-adff-ed242ed94233-kube-api-access-2d6vr\") pod \"kube-proxy-64v8h\" (UID: \"ed8d7370-6550-4947-adff-ed242ed94233\") " pod="kube-system/kube-proxy-64v8h"
Jan 30 14:23:30.626983 kubelet[2712]: I0130 14:23:30.626940 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cilium-run\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.626983 kubelet[2712]: I0130 14:23:30.626966 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cni-path\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.627093 kubelet[2712]: I0130 14:23:30.626987 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-hubble-tls\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.627093 kubelet[2712]: I0130 14:23:30.627007 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed8d7370-6550-4947-adff-ed242ed94233-xtables-lock\") pod \"kube-proxy-64v8h\" (UID: \"ed8d7370-6550-4947-adff-ed242ed94233\") " pod="kube-system/kube-proxy-64v8h"
Jan 30 14:23:30.627093 kubelet[2712]: I0130 14:23:30.627028 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-host-proc-sys-net\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.627093 kubelet[2712]: I0130 14:23:30.627047 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-hostproc\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.627093 kubelet[2712]: I0130 14:23:30.627070 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-lib-modules\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.627093 kubelet[2712]: I0130 14:23:30.627089 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-xtables-lock\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.627257 kubelet[2712]: I0130 14:23:30.627111 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-host-proc-sys-kernel\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.627257 kubelet[2712]: I0130 14:23:30.627134 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccnd4\" (UniqueName: \"kubernetes.io/projected/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-kube-api-access-ccnd4\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.627257 kubelet[2712]: I0130 14:23:30.627157 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-clustermesh-secrets\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.627257 kubelet[2712]: I0130 14:23:30.627192 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed8d7370-6550-4947-adff-ed242ed94233-kube-proxy\") pod \"kube-proxy-64v8h\" (UID: \"ed8d7370-6550-4947-adff-ed242ed94233\") " pod="kube-system/kube-proxy-64v8h"
Jan 30 14:23:30.627257 kubelet[2712]: I0130 14:23:30.627211 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-bpf-maps\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.627393 kubelet[2712]: I0130 14:23:30.627230 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-etc-cni-netd\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.627393 kubelet[2712]: I0130 14:23:30.627252 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cilium-config-path\") pod \"cilium-vf4p9\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " pod="kube-system/cilium-vf4p9"
Jan 30 14:23:30.630062 kubelet[2712]: I0130 14:23:30.629510 2712 topology_manager.go:215] "Topology Admit Handler" podUID="2aeeaad1-925f-4992-ab03-0ac020930fce" podNamespace="kube-system" podName="cilium-operator-599987898-ldmdd"
Jan 30 14:23:30.637958 systemd[1]: Created slice kubepods-besteffort-pod2aeeaad1_925f_4992_ab03_0ac020930fce.slice - libcontainer container kubepods-besteffort-pod2aeeaad1_925f_4992_ab03_0ac020930fce.slice.
Jan 30 14:23:30.728635 kubelet[2712]: I0130 14:23:30.728588 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2aeeaad1-925f-4992-ab03-0ac020930fce-cilium-config-path\") pod \"cilium-operator-599987898-ldmdd\" (UID: \"2aeeaad1-925f-4992-ab03-0ac020930fce\") " pod="kube-system/cilium-operator-599987898-ldmdd"
Jan 30 14:23:30.728635 kubelet[2712]: I0130 14:23:30.728637 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7ft\" (UniqueName: \"kubernetes.io/projected/2aeeaad1-925f-4992-ab03-0ac020930fce-kube-api-access-hj7ft\") pod \"cilium-operator-599987898-ldmdd\" (UID: \"2aeeaad1-925f-4992-ab03-0ac020930fce\") " pod="kube-system/cilium-operator-599987898-ldmdd"
Jan 30 14:23:31.730146 kubelet[2712]: E0130 14:23:31.730063 2712 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Jan 30 14:23:31.731053 kubelet[2712]: E0130 14:23:31.730227 2712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed8d7370-6550-4947-adff-ed242ed94233-kube-proxy podName:ed8d7370-6550-4947-adff-ed242ed94233 nodeName:}" failed. No retries permitted until 2025-01-30 14:23:32.230185447 +0000 UTC m=+15.584096136 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ed8d7370-6550-4947-adff-ed242ed94233-kube-proxy") pod "kube-proxy-64v8h" (UID: "ed8d7370-6550-4947-adff-ed242ed94233") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 14:23:31.748470 kubelet[2712]: E0130 14:23:31.748415 2712 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 30 14:23:31.748470 kubelet[2712]: E0130 14:23:31.748471 2712 projected.go:200] Error preparing data for projected volume kube-api-access-2d6vr for pod kube-system/kube-proxy-64v8h: failed to sync configmap cache: timed out waiting for the condition
Jan 30 14:23:31.749133 kubelet[2712]: E0130 14:23:31.748626 2712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed8d7370-6550-4947-adff-ed242ed94233-kube-api-access-2d6vr podName:ed8d7370-6550-4947-adff-ed242ed94233 nodeName:}" failed. No retries permitted until 2025-01-30 14:23:32.24855085 +0000 UTC m=+15.602461538 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-2d6vr" (UniqueName: "kubernetes.io/projected/ed8d7370-6550-4947-adff-ed242ed94233-kube-api-access-2d6vr") pod "kube-proxy-64v8h" (UID: "ed8d7370-6550-4947-adff-ed242ed94233") : failed to sync configmap cache: timed out waiting for the condition Jan 30 14:23:31.749133 kubelet[2712]: E0130 14:23:31.748779 2712 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:23:31.749133 kubelet[2712]: E0130 14:23:31.748833 2712 projected.go:200] Error preparing data for projected volume kube-api-access-ccnd4 for pod kube-system/cilium-vf4p9: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:23:31.749133 kubelet[2712]: E0130 14:23:31.748959 2712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-kube-api-access-ccnd4 podName:af0ddd45-8ee5-4e7d-a546-0b8226ca1f83 nodeName:}" failed. No retries permitted until 2025-01-30 14:23:32.248894474 +0000 UTC m=+15.602805162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ccnd4" (UniqueName: "kubernetes.io/projected/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-kube-api-access-ccnd4") pod "cilium-vf4p9" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83") : failed to sync configmap cache: timed out waiting for the condition Jan 30 14:23:31.842518 containerd[1456]: time="2025-01-30T14:23:31.842413178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ldmdd,Uid:2aeeaad1-925f-4992-ab03-0ac020930fce,Namespace:kube-system,Attempt:0,}" Jan 30 14:23:31.922780 containerd[1456]: time="2025-01-30T14:23:31.922265351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:23:31.922780 containerd[1456]: time="2025-01-30T14:23:31.922396277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:23:31.922780 containerd[1456]: time="2025-01-30T14:23:31.922441061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:31.922780 containerd[1456]: time="2025-01-30T14:23:31.922641978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:31.956730 systemd[1]: Started cri-containerd-1fa3e7427696bc7f7461d71cc9f005783841e262ce3f6a4b5b683c84fccb590b.scope - libcontainer container 1fa3e7427696bc7f7461d71cc9f005783841e262ce3f6a4b5b683c84fccb590b. 
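The MountVolume failures above all end in "failed to sync configmap cache: timed out waiting for the condition", and the kubelet requeues each operation with a 500 ms delay (durationBeforeRetry in the log). A rough, dependency-free sketch of that wait-with-timeout pattern follows; the real code lives in k8s.io/apimachinery's wait package, so treat this purely as an illustration.

package main

import (
	"errors"
	"fmt"
	"sync/atomic"
	"time"
)

// waitFor polls cond until it returns true or timeout elapses. On
// timeout it returns the same wording seen in the kubelet errors above.
func waitFor(cond func() bool, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if cond() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	var synced atomic.Bool
	// Simulate a cache that only syncs after the caller has given up.
	time.AfterFunc(2*time.Second, func() { synced.Store(true) })

	if err := waitFor(synced.Load, 100*time.Millisecond, time.Second); err != nil {
		// A caller like the kubelet requeues the mount with a delay
		// (500ms above) instead of failing the pod outright.
		fmt.Println("mount failed:", err)
	}
}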
Jan 30 14:23:31.992512 containerd[1456]: time="2025-01-30T14:23:31.992416281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ldmdd,Uid:2aeeaad1-925f-4992-ab03-0ac020930fce,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fa3e7427696bc7f7461d71cc9f005783841e262ce3f6a4b5b683c84fccb590b\"" Jan 30 14:23:31.994712 containerd[1456]: time="2025-01-30T14:23:31.994463872Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 14:23:32.369521 containerd[1456]: time="2025-01-30T14:23:32.369269009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-64v8h,Uid:ed8d7370-6550-4947-adff-ed242ed94233,Namespace:kube-system,Attempt:0,}" Jan 30 14:23:32.387721 containerd[1456]: time="2025-01-30T14:23:32.387088848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vf4p9,Uid:af0ddd45-8ee5-4e7d-a546-0b8226ca1f83,Namespace:kube-system,Attempt:0,}" Jan 30 14:23:32.427556 containerd[1456]: time="2025-01-30T14:23:32.427146432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:23:32.427556 containerd[1456]: time="2025-01-30T14:23:32.427245848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:23:32.427556 containerd[1456]: time="2025-01-30T14:23:32.427282357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:32.427556 containerd[1456]: time="2025-01-30T14:23:32.427413893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:32.445465 containerd[1456]: time="2025-01-30T14:23:32.443776439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:23:32.445465 containerd[1456]: time="2025-01-30T14:23:32.443825722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:23:32.445465 containerd[1456]: time="2025-01-30T14:23:32.443839057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:32.445465 containerd[1456]: time="2025-01-30T14:23:32.443918726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:32.451763 systemd[1]: Started cri-containerd-127c67689ccc256403f23e68822ea691da43fd2d4c8fe1fcb74b75bf63477cff.scope - libcontainer container 127c67689ccc256403f23e68822ea691da43fd2d4c8fe1fcb74b75bf63477cff. Jan 30 14:23:32.471711 systemd[1]: Started cri-containerd-f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25.scope - libcontainer container f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25. 
Jan 30 14:23:32.496282 containerd[1456]: time="2025-01-30T14:23:32.496133069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-64v8h,Uid:ed8d7370-6550-4947-adff-ed242ed94233,Namespace:kube-system,Attempt:0,} returns sandbox id \"127c67689ccc256403f23e68822ea691da43fd2d4c8fe1fcb74b75bf63477cff\"" Jan 30 14:23:32.499899 containerd[1456]: time="2025-01-30T14:23:32.499762655Z" level=info msg="CreateContainer within sandbox \"127c67689ccc256403f23e68822ea691da43fd2d4c8fe1fcb74b75bf63477cff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:23:32.510222 containerd[1456]: time="2025-01-30T14:23:32.510177427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vf4p9,Uid:af0ddd45-8ee5-4e7d-a546-0b8226ca1f83,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\"" Jan 30 14:23:32.526787 containerd[1456]: time="2025-01-30T14:23:32.526693010Z" level=info msg="CreateContainer within sandbox \"127c67689ccc256403f23e68822ea691da43fd2d4c8fe1fcb74b75bf63477cff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf506e66ae6ac29cc6892a9c4aac59ca12932c4398de5fcd05684310f5f9cf96\"" Jan 30 14:23:32.527524 containerd[1456]: time="2025-01-30T14:23:32.527371292Z" level=info msg="StartContainer for \"cf506e66ae6ac29cc6892a9c4aac59ca12932c4398de5fcd05684310f5f9cf96\"" Jan 30 14:23:32.555726 systemd[1]: Started cri-containerd-cf506e66ae6ac29cc6892a9c4aac59ca12932c4398de5fcd05684310f5f9cf96.scope - libcontainer container cf506e66ae6ac29cc6892a9c4aac59ca12932c4398de5fcd05684310f5f9cf96. Jan 30 14:23:32.588212 containerd[1456]: time="2025-01-30T14:23:32.588170575Z" level=info msg="StartContainer for \"cf506e66ae6ac29cc6892a9c4aac59ca12932c4398de5fcd05684310f5f9cf96\" returns successfully" Jan 30 14:23:32.912527 kubelet[2712]: I0130 14:23:32.912422 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-64v8h" podStartSLOduration=2.912404541 podStartE2EDuration="2.912404541s" podCreationTimestamp="2025-01-30 14:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:23:32.911426427 +0000 UTC m=+16.265337075" watchObservedRunningTime="2025-01-30 14:23:32.912404541 +0000 UTC m=+16.266315189" Jan 30 14:23:33.905102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1709820370.mount: Deactivated successfully. 
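The pod_startup_latency_tracker line above derives podStartE2EDuration from podCreationTimestamp and the observed running time. The timestamps use Go's default time format, so the arithmetic can be reproduced directly; the values below are copied from the log.

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-30 14:23:30 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-30 14:23:32.911426427 +0000 UTC")
	// kube-proxy had no image pull phase (firstStartedPulling is the
	// zero time in the log), so end-to-end startup is the difference.
	// Prints ~2.911s, close to the logged 2.912404541s; the tracker
	// samples its own clock, hence the small gap.
	fmt.Println("podStartE2EDuration =", running.Sub(created))
}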
Jan 30 14:23:34.629348 containerd[1456]: time="2025-01-30T14:23:34.629281677Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:34.630683 containerd[1456]: time="2025-01-30T14:23:34.630644483Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 14:23:34.631447 containerd[1456]: time="2025-01-30T14:23:34.631403306Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:34.632994 containerd[1456]: time="2025-01-30T14:23:34.632847084Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.638348297s" Jan 30 14:23:34.632994 containerd[1456]: time="2025-01-30T14:23:34.632885015Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 14:23:34.635614 containerd[1456]: time="2025-01-30T14:23:34.634740255Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 14:23:34.635614 containerd[1456]: time="2025-01-30T14:23:34.635395363Z" level=info msg="CreateContainer within sandbox \"1fa3e7427696bc7f7461d71cc9f005783841e262ce3f6a4b5b683c84fccb590b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 14:23:34.661000 containerd[1456]: time="2025-01-30T14:23:34.660951150Z" level=info msg="CreateContainer within sandbox \"1fa3e7427696bc7f7461d71cc9f005783841e262ce3f6a4b5b683c84fccb590b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\"" Jan 30 14:23:34.661746 containerd[1456]: time="2025-01-30T14:23:34.661717898Z" level=info msg="StartContainer for \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\"" Jan 30 14:23:34.700736 systemd[1]: Started cri-containerd-c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609.scope - libcontainer container c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609. Jan 30 14:23:34.733058 containerd[1456]: time="2025-01-30T14:23:34.732853907Z" level=info msg="StartContainer for \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\" returns successfully" Jan 30 14:23:40.744550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2641461386.mount: Deactivated successfully. 
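The operator image above is pulled by tag and digest at once ("repo:tag@sha256:..."), and containerd then reports the resolved image ID. A small sketch of decomposing such a pinned reference with plain string handling; the reference is the one logged above, and a production parser would need to handle more cases.

package main

import (
	"fmt"
	"strings"
)

func main() {
	ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"

	name, digest, _ := strings.Cut(ref, "@") // the digest pins content
	repo, tag, _ := strings.Cut(name, ":")   // the tag is informational
	// Note: a registry host carrying a port (host:5000/...) would need
	// smarter splitting than a single Cut on ":".

	fmt.Println("repo:  ", repo)
	fmt.Println("tag:   ", tag)
	fmt.Println("digest:", digest)
}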
Jan 30 14:23:43.343168 containerd[1456]: time="2025-01-30T14:23:43.343006321Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:43.344650 containerd[1456]: time="2025-01-30T14:23:43.344586294Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 14:23:43.346311 containerd[1456]: time="2025-01-30T14:23:43.346259492Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:23:43.348609 containerd[1456]: time="2025-01-30T14:23:43.348416897Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.713641075s" Jan 30 14:23:43.348609 containerd[1456]: time="2025-01-30T14:23:43.348446823Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 14:23:43.354455 containerd[1456]: time="2025-01-30T14:23:43.354387435Z" level=info msg="CreateContainer within sandbox \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 14:23:43.392435 containerd[1456]: time="2025-01-30T14:23:43.392353929Z" level=info msg="CreateContainer within sandbox \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d\"" Jan 30 14:23:43.394127 containerd[1456]: time="2025-01-30T14:23:43.393849955Z" level=info msg="StartContainer for \"b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d\"" Jan 30 14:23:43.442724 systemd[1]: Started cri-containerd-b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d.scope - libcontainer container b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d. Jan 30 14:23:43.478700 containerd[1456]: time="2025-01-30T14:23:43.478549794Z" level=info msg="StartContainer for \"b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d\" returns successfully" Jan 30 14:23:43.485469 systemd[1]: cri-containerd-b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d.scope: Deactivated successfully. 
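containerd logs both bytes read and wall time for each pull, so effective throughput falls out directly: the operator image came in at roughly 18.9 MB over 2.64 s and the much larger agent image above at 166.7 MB over 8.71 s. A quick sketch of the arithmetic, with the numbers copied from the log:

package main

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64 // "bytes read" from the containerd log
		seconds float64 // pull duration from the containerd log
	}{
		{"cilium/operator-generic:v1.12.5", 18904197, 2.638348297},
		{"cilium/cilium:v1.12.5", 166730503, 8.713641075},
	}
	for _, p := range pulls {
		// Prints ~7.2 MB/s and ~19.1 MB/s respectively.
		fmt.Printf("%-34s %6.1f MB/s\n", p.image, p.bytes/p.seconds/1e6)
	}
}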
Jan 30 14:23:44.075047 kubelet[2712]: I0130 14:23:44.074868 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-ldmdd" podStartSLOduration=11.434527695 podStartE2EDuration="14.074499788s" podCreationTimestamp="2025-01-30 14:23:30 +0000 UTC" firstStartedPulling="2025-01-30 14:23:31.994067909 +0000 UTC m=+15.347978557" lastFinishedPulling="2025-01-30 14:23:34.634040011 +0000 UTC m=+17.987950650" observedRunningTime="2025-01-30 14:23:34.949190025 +0000 UTC m=+18.303100693" watchObservedRunningTime="2025-01-30 14:23:44.074499788 +0000 UTC m=+27.428410476" Jan 30 14:23:44.337337 containerd[1456]: time="2025-01-30T14:23:44.337070616Z" level=info msg="shim disconnected" id=b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d namespace=k8s.io Jan 30 14:23:44.337337 containerd[1456]: time="2025-01-30T14:23:44.337193306Z" level=warning msg="cleaning up after shim disconnected" id=b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d namespace=k8s.io Jan 30 14:23:44.337337 containerd[1456]: time="2025-01-30T14:23:44.337216730Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:23:44.379513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d-rootfs.mount: Deactivated successfully. Jan 30 14:23:44.977114 containerd[1456]: time="2025-01-30T14:23:44.976608238Z" level=info msg="CreateContainer within sandbox \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 14:23:45.017942 containerd[1456]: time="2025-01-30T14:23:45.017814879Z" level=info msg="CreateContainer within sandbox \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb\"" Jan 30 14:23:45.021969 containerd[1456]: time="2025-01-30T14:23:45.021904187Z" level=info msg="StartContainer for \"54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb\"" Jan 30 14:23:45.071720 systemd[1]: Started cri-containerd-54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb.scope - libcontainer container 54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb. Jan 30 14:23:45.105275 containerd[1456]: time="2025-01-30T14:23:45.105168015Z" level=info msg="StartContainer for \"54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb\" returns successfully" Jan 30 14:23:45.111724 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:23:45.112560 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:23:45.112769 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:23:45.120020 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:23:45.120257 systemd[1]: cri-containerd-54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb.scope: Deactivated successfully. Jan 30 14:23:45.140737 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
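The systemd-sysctl restart above appears to be triggered by cilium's apply-sysctl-overwrites init step writing kernel parameters, after which Flatcar re-runs "Apply Kernel Variables". Which parameters cilium overwrites is not named in this log; as a hedged illustration, this is how any process reads one back through /proc/sys (ip_forward chosen arbitrarily).

package main

import (
	"fmt"
	"os"
	"strings"
)

// Sysctls are plain files under /proc/sys; net.ipv4.ip_forward maps to
// /proc/sys/net/ipv4/ip_forward. Writing works the same way (given
// privileges), which is all an "apply-sysctl-overwrites" style step does.
func main() {
	b, err := os.ReadFile("/proc/sys/net/ipv4/ip_forward")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("net.ipv4.ip_forward =", strings.TrimSpace(string(b)))
}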
Jan 30 14:23:45.152272 containerd[1456]: time="2025-01-30T14:23:45.152163423Z" level=info msg="shim disconnected" id=54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb namespace=k8s.io Jan 30 14:23:45.152272 containerd[1456]: time="2025-01-30T14:23:45.152211022Z" level=warning msg="cleaning up after shim disconnected" id=54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb namespace=k8s.io Jan 30 14:23:45.152272 containerd[1456]: time="2025-01-30T14:23:45.152220259Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:23:45.163406 containerd[1456]: time="2025-01-30T14:23:45.163357066Z" level=warning msg="cleanup warnings time=\"2025-01-30T14:23:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 14:23:45.381075 systemd[1]: run-containerd-runc-k8s.io-54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb-runc.4TfSc3.mount: Deactivated successfully. Jan 30 14:23:45.381343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb-rootfs.mount: Deactivated successfully. Jan 30 14:23:45.982630 containerd[1456]: time="2025-01-30T14:23:45.981566249Z" level=info msg="CreateContainer within sandbox \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 14:23:46.024177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161792950.mount: Deactivated successfully. Jan 30 14:23:46.030293 containerd[1456]: time="2025-01-30T14:23:46.029494255Z" level=info msg="CreateContainer within sandbox \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75\"" Jan 30 14:23:46.033304 containerd[1456]: time="2025-01-30T14:23:46.033194846Z" level=info msg="StartContainer for \"fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75\"" Jan 30 14:23:46.090064 systemd[1]: Started cri-containerd-fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75.scope - libcontainer container fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75. Jan 30 14:23:46.121559 systemd[1]: cri-containerd-fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75.scope: Deactivated successfully. Jan 30 14:23:46.127555 containerd[1456]: time="2025-01-30T14:23:46.127258969Z" level=info msg="StartContainer for \"fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75\" returns successfully" Jan 30 14:23:46.155103 containerd[1456]: time="2025-01-30T14:23:46.154945644Z" level=info msg="shim disconnected" id=fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75 namespace=k8s.io Jan 30 14:23:46.155103 containerd[1456]: time="2025-01-30T14:23:46.155006549Z" level=warning msg="cleaning up after shim disconnected" id=fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75 namespace=k8s.io Jan 30 14:23:46.155103 containerd[1456]: time="2025-01-30T14:23:46.155016618Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:23:46.376429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75-rootfs.mount: Deactivated successfully. 
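mount-bpf-fs, the init step created above, conventionally ensures a BPF filesystem is mounted (usually at /sys/fs/bpf) so cilium's maps survive agent restarts. A small Linux-only sketch that checks for it by scanning the mount table; the path and purpose are the usual convention, not something this log states explicitly.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Format: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[2] == "bpf" {
			fmt.Println("bpf filesystem mounted at", fields[1])
			return
		}
	}
	fmt.Println("no bpf filesystem mounted")
}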
Jan 30 14:23:46.995052 containerd[1456]: time="2025-01-30T14:23:46.994273126Z" level=info msg="CreateContainer within sandbox \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 14:23:47.040752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2160204493.mount: Deactivated successfully. Jan 30 14:23:47.063031 containerd[1456]: time="2025-01-30T14:23:47.062895465Z" level=info msg="CreateContainer within sandbox \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6\"" Jan 30 14:23:47.065659 containerd[1456]: time="2025-01-30T14:23:47.065529706Z" level=info msg="StartContainer for \"a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6\"" Jan 30 14:23:47.101761 systemd[1]: Started cri-containerd-a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6.scope - libcontainer container a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6. Jan 30 14:23:47.124697 systemd[1]: cri-containerd-a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6.scope: Deactivated successfully. Jan 30 14:23:47.130725 containerd[1456]: time="2025-01-30T14:23:47.130459719Z" level=info msg="StartContainer for \"a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6\" returns successfully" Jan 30 14:23:47.157197 containerd[1456]: time="2025-01-30T14:23:47.157116533Z" level=info msg="shim disconnected" id=a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6 namespace=k8s.io Jan 30 14:23:47.157197 containerd[1456]: time="2025-01-30T14:23:47.157171826Z" level=warning msg="cleaning up after shim disconnected" id=a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6 namespace=k8s.io Jan 30 14:23:47.157197 containerd[1456]: time="2025-01-30T14:23:47.157183899Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:23:47.376483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6-rootfs.mount: Deactivated successfully. Jan 30 14:23:47.998932 containerd[1456]: time="2025-01-30T14:23:47.998840782Z" level=info msg="CreateContainer within sandbox \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 14:23:48.048285 containerd[1456]: time="2025-01-30T14:23:48.042934819Z" level=info msg="CreateContainer within sandbox \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\"" Jan 30 14:23:48.048285 containerd[1456]: time="2025-01-30T14:23:48.045124325Z" level=info msg="StartContainer for \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\"" Jan 30 14:23:48.107727 systemd[1]: Started cri-containerd-829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125.scope - libcontainer container 829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125. 
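Taken together, sandbox f3902b0a… runs cilium's init containers strictly in order — mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state — each created, started, exited (scope deactivated), and reaped (shim disconnected) before the next begins, and only then is cilium-agent created. A toy sketch of that run-to-completion ordering; the step names are from the log, the bodies are placeholders.

package main

import "fmt"

type step struct {
	name string
	run  func() error
}

func main() {
	// Init containers run one at a time; a failure aborts pod startup
	// and the kubelet retries the sequence per the restart policy.
	steps := []step{
		{"mount-cgroup", func() error { return nil }},
		{"apply-sysctl-overwrites", func() error { return nil }},
		{"mount-bpf-fs", func() error { return nil }},
		{"clean-cilium-state", func() error { return nil }},
	}
	for _, s := range steps {
		if err := s.run(); err != nil {
			fmt.Println(s.name, "failed:", err)
			return
		}
		fmt.Println(s.name, "completed")
	}
	fmt.Println("starting main container: cilium-agent")
}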
Jan 30 14:23:48.142887 containerd[1456]: time="2025-01-30T14:23:48.142842652Z" level=info msg="StartContainer for \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\" returns successfully" Jan 30 14:23:48.223748 kubelet[2712]: I0130 14:23:48.223710 2712 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 14:23:48.255500 kubelet[2712]: I0130 14:23:48.255362 2712 topology_manager.go:215] "Topology Admit Handler" podUID="d35fdc33-6c31-4fa8-b923-645abdfc66b9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gn2zd" Jan 30 14:23:48.260884 kubelet[2712]: I0130 14:23:48.260410 2712 topology_manager.go:215] "Topology Admit Handler" podUID="ae1d1a42-1806-45fa-9d11-13bebc52131f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cwkdw" Jan 30 14:23:48.263565 kubelet[2712]: W0130 14:23:48.262760 2712 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186-1-0-5-d272c7c7c0.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-5-d272c7c7c0.novalocal' and this object Jan 30 14:23:48.263565 kubelet[2712]: E0130 14:23:48.262899 2712 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186-1-0-5-d272c7c7c0.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-5-d272c7c7c0.novalocal' and this object Jan 30 14:23:48.267911 systemd[1]: Created slice kubepods-burstable-podd35fdc33_6c31_4fa8_b923_645abdfc66b9.slice - libcontainer container kubepods-burstable-podd35fdc33_6c31_4fa8_b923_645abdfc66b9.slice. Jan 30 14:23:48.275868 systemd[1]: Created slice kubepods-burstable-podae1d1a42_1806_45fa_9d11_13bebc52131f.slice - libcontainer container kubepods-burstable-podae1d1a42_1806_45fa_9d11_13bebc52131f.slice. 
Jan 30 14:23:48.358220 kubelet[2712]: I0130 14:23:48.358188 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjms8\" (UniqueName: \"kubernetes.io/projected/ae1d1a42-1806-45fa-9d11-13bebc52131f-kube-api-access-mjms8\") pod \"coredns-7db6d8ff4d-cwkdw\" (UID: \"ae1d1a42-1806-45fa-9d11-13bebc52131f\") " pod="kube-system/coredns-7db6d8ff4d-cwkdw" Jan 30 14:23:48.358521 kubelet[2712]: I0130 14:23:48.358410 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae1d1a42-1806-45fa-9d11-13bebc52131f-config-volume\") pod \"coredns-7db6d8ff4d-cwkdw\" (UID: \"ae1d1a42-1806-45fa-9d11-13bebc52131f\") " pod="kube-system/coredns-7db6d8ff4d-cwkdw" Jan 30 14:23:48.358521 kubelet[2712]: I0130 14:23:48.358439 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d35fdc33-6c31-4fa8-b923-645abdfc66b9-config-volume\") pod \"coredns-7db6d8ff4d-gn2zd\" (UID: \"d35fdc33-6c31-4fa8-b923-645abdfc66b9\") " pod="kube-system/coredns-7db6d8ff4d-gn2zd" Jan 30 14:23:48.358521 kubelet[2712]: I0130 14:23:48.358490 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnthb\" (UniqueName: \"kubernetes.io/projected/d35fdc33-6c31-4fa8-b923-645abdfc66b9-kube-api-access-tnthb\") pod \"coredns-7db6d8ff4d-gn2zd\" (UID: \"d35fdc33-6c31-4fa8-b923-645abdfc66b9\") " pod="kube-system/coredns-7db6d8ff4d-gn2zd" Jan 30 14:23:49.039628 kubelet[2712]: I0130 14:23:49.039368 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vf4p9" podStartSLOduration=8.200203447 podStartE2EDuration="19.039329619s" podCreationTimestamp="2025-01-30 14:23:30 +0000 UTC" firstStartedPulling="2025-01-30 14:23:32.511903444 +0000 UTC m=+15.865814083" lastFinishedPulling="2025-01-30 14:23:43.351029607 +0000 UTC m=+26.704940255" observedRunningTime="2025-01-30 14:23:49.038712351 +0000 UTC m=+32.392623050" watchObservedRunningTime="2025-01-30 14:23:49.039329619 +0000 UTC m=+32.393240317" Jan 30 14:23:49.172661 containerd[1456]: time="2025-01-30T14:23:49.172620833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gn2zd,Uid:d35fdc33-6c31-4fa8-b923-645abdfc66b9,Namespace:kube-system,Attempt:0,}" Jan 30 14:23:49.185217 containerd[1456]: time="2025-01-30T14:23:49.184852292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cwkdw,Uid:ae1d1a42-1806-45fa-9d11-13bebc52131f,Namespace:kube-system,Attempt:0,}" Jan 30 14:23:50.289834 systemd-networkd[1366]: cilium_host: Link UP Jan 30 14:23:50.292349 systemd-networkd[1366]: cilium_net: Link UP Jan 30 14:23:50.294263 systemd-networkd[1366]: cilium_net: Gained carrier Jan 30 14:23:50.295022 systemd-networkd[1366]: cilium_host: Gained carrier Jan 30 14:23:50.404407 systemd-networkd[1366]: cilium_vxlan: Link UP Jan 30 14:23:50.404414 systemd-networkd[1366]: cilium_vxlan: Gained carrier Jan 30 14:23:50.643730 kernel: NET: Registered PF_ALG protocol family Jan 30 14:23:51.091759 systemd-networkd[1366]: cilium_net: Gained IPv6LL Jan 30 14:23:51.283775 systemd-networkd[1366]: cilium_host: Gained IPv6LL Jan 30 14:23:51.400382 systemd-networkd[1366]: lxc_health: Link UP Jan 30 14:23:51.408984 systemd-networkd[1366]: lxc_health: Gained carrier Jan 30 14:23:51.790282 systemd-networkd[1366]: lxc9c432655f736: 
Link UP Jan 30 14:23:51.798230 kernel: eth0: renamed from tmp3acc9 Jan 30 14:23:51.804832 systemd-networkd[1366]: lxc33f7fc5d26c6: Link UP Jan 30 14:23:51.805301 systemd-networkd[1366]: lxc9c432655f736: Gained carrier Jan 30 14:23:51.810689 kernel: eth0: renamed from tmp1a2f7 Jan 30 14:23:51.819016 systemd-networkd[1366]: lxc33f7fc5d26c6: Gained carrier Jan 30 14:23:52.052763 systemd-networkd[1366]: cilium_vxlan: Gained IPv6LL Jan 30 14:23:52.627786 systemd-networkd[1366]: lxc_health: Gained IPv6LL Jan 30 14:23:53.012017 systemd-networkd[1366]: lxc9c432655f736: Gained IPv6LL Jan 30 14:23:53.587924 systemd-networkd[1366]: lxc33f7fc5d26c6: Gained IPv6LL Jan 30 14:23:53.920709 kubelet[2712]: I0130 14:23:53.920350 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:23:56.373148 containerd[1456]: time="2025-01-30T14:23:56.372357917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:23:56.373148 containerd[1456]: time="2025-01-30T14:23:56.372427788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:23:56.373148 containerd[1456]: time="2025-01-30T14:23:56.372448137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:56.373148 containerd[1456]: time="2025-01-30T14:23:56.372535742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:56.388857 containerd[1456]: time="2025-01-30T14:23:56.388621099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:23:56.388857 containerd[1456]: time="2025-01-30T14:23:56.388707090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:23:56.388857 containerd[1456]: time="2025-01-30T14:23:56.388726927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:56.389195 containerd[1456]: time="2025-01-30T14:23:56.388825423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:23:56.416053 systemd[1]: Started cri-containerd-1a2f777b2aa862b778c271d5afa8db2eec77a7eeb942a10f53d3d563ce706a33.scope - libcontainer container 1a2f777b2aa862b778c271d5afa8db2eec77a7eeb942a10f53d3d563ce706a33. Jan 30 14:23:56.426974 systemd[1]: Started cri-containerd-3acc9f6c4b420c4bccc140ca45279174f9c17de42967704fca453bbfc951bfd3.scope - libcontainer container 3acc9f6c4b420c4bccc140ca45279174f9c17de42967704fca453bbfc951bfd3. 
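After the agent starts, systemd-networkd sees cilium's datapath devices come up: cilium_host and cilium_net (a veth pair in cilium's usual layout), cilium_vxlan for the overlay, lxc_health, and one lxc* veth per pod endpoint — the kernel lines show temporary names (tmp3acc9, tmp1a2f7) being renamed to eth0 as each peer moves into its pod. A stdlib sketch that lists those devices on the host; the name prefixes are taken from the log.

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc") {
			fmt.Printf("%-16s up=%v mtu=%d\n", ifc.Name,
				ifc.Flags&net.FlagUp != 0, ifc.MTU)
		}
	}
}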
Jan 30 14:23:56.473543 containerd[1456]: time="2025-01-30T14:23:56.473437316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gn2zd,Uid:d35fdc33-6c31-4fa8-b923-645abdfc66b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"3acc9f6c4b420c4bccc140ca45279174f9c17de42967704fca453bbfc951bfd3\"" Jan 30 14:23:56.478289 containerd[1456]: time="2025-01-30T14:23:56.477951471Z" level=info msg="CreateContainer within sandbox \"3acc9f6c4b420c4bccc140ca45279174f9c17de42967704fca453bbfc951bfd3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:23:56.510446 containerd[1456]: time="2025-01-30T14:23:56.510343362Z" level=info msg="CreateContainer within sandbox \"3acc9f6c4b420c4bccc140ca45279174f9c17de42967704fca453bbfc951bfd3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ec46eee4bd7512c96f70a0f1ca4ed38e0699d1534e2731783ed52a23aed4d4c\"" Jan 30 14:23:56.513797 containerd[1456]: time="2025-01-30T14:23:56.511080127Z" level=info msg="StartContainer for \"1ec46eee4bd7512c96f70a0f1ca4ed38e0699d1534e2731783ed52a23aed4d4c\"" Jan 30 14:23:56.520222 containerd[1456]: time="2025-01-30T14:23:56.520188178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cwkdw,Uid:ae1d1a42-1806-45fa-9d11-13bebc52131f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a2f777b2aa862b778c271d5afa8db2eec77a7eeb942a10f53d3d563ce706a33\"" Jan 30 14:23:56.530585 containerd[1456]: time="2025-01-30T14:23:56.530530991Z" level=info msg="CreateContainer within sandbox \"1a2f777b2aa862b778c271d5afa8db2eec77a7eeb942a10f53d3d563ce706a33\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:23:56.552740 systemd[1]: Started cri-containerd-1ec46eee4bd7512c96f70a0f1ca4ed38e0699d1534e2731783ed52a23aed4d4c.scope - libcontainer container 1ec46eee4bd7512c96f70a0f1ca4ed38e0699d1534e2731783ed52a23aed4d4c. Jan 30 14:23:56.569196 containerd[1456]: time="2025-01-30T14:23:56.568085685Z" level=info msg="CreateContainer within sandbox \"1a2f777b2aa862b778c271d5afa8db2eec77a7eeb942a10f53d3d563ce706a33\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c84e754df7fe9d624bb1ddb2daea46c280d4641fdee9aa5be7af087f58a7417c\"" Jan 30 14:23:56.570268 containerd[1456]: time="2025-01-30T14:23:56.570238212Z" level=info msg="StartContainer for \"c84e754df7fe9d624bb1ddb2daea46c280d4641fdee9aa5be7af087f58a7417c\"" Jan 30 14:23:56.603439 systemd[1]: Started cri-containerd-c84e754df7fe9d624bb1ddb2daea46c280d4641fdee9aa5be7af087f58a7417c.scope - libcontainer container c84e754df7fe9d624bb1ddb2daea46c280d4641fdee9aa5be7af087f58a7417c. 
Jan 30 14:23:56.605278 containerd[1456]: time="2025-01-30T14:23:56.605250297Z" level=info msg="StartContainer for \"1ec46eee4bd7512c96f70a0f1ca4ed38e0699d1534e2731783ed52a23aed4d4c\" returns successfully" Jan 30 14:23:56.641519 containerd[1456]: time="2025-01-30T14:23:56.640550895Z" level=info msg="StartContainer for \"c84e754df7fe9d624bb1ddb2daea46c280d4641fdee9aa5be7af087f58a7417c\" returns successfully" Jan 30 14:23:57.051017 kubelet[2712]: I0130 14:23:57.050893 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cwkdw" podStartSLOduration=27.050863102 podStartE2EDuration="27.050863102s" podCreationTimestamp="2025-01-30 14:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:23:57.046251134 +0000 UTC m=+40.400161822" watchObservedRunningTime="2025-01-30 14:23:57.050863102 +0000 UTC m=+40.404773780" Jan 30 14:23:57.076259 kubelet[2712]: I0130 14:23:57.076161 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gn2zd" podStartSLOduration=27.076133242 podStartE2EDuration="27.076133242s" podCreationTimestamp="2025-01-30 14:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:23:57.073438347 +0000 UTC m=+40.427349025" watchObservedRunningTime="2025-01-30 14:23:57.076133242 +0000 UTC m=+40.430043920" Jan 30 14:23:57.387345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177800960.mount: Deactivated successfully. Jan 30 14:24:42.096366 systemd[1]: Started sshd@9-172.24.4.105:22-172.24.4.1:46946.service - OpenSSH per-connection server daemon (172.24.4.1:46946). Jan 30 14:24:43.583162 sshd[4081]: Accepted publickey for core from 172.24.4.1 port 46946 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:24:43.586195 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:24:43.599936 systemd-logind[1442]: New session 12 of user core. Jan 30 14:24:43.606101 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:24:44.332206 sshd[4083]: Connection closed by 172.24.4.1 port 46946 Jan 30 14:24:44.333362 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Jan 30 14:24:44.339029 systemd[1]: sshd@9-172.24.4.105:22-172.24.4.1:46946.service: Deactivated successfully. Jan 30 14:24:44.341820 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:24:44.344786 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:24:44.346678 systemd-logind[1442]: Removed session 12. Jan 30 14:24:49.354004 systemd[1]: Started sshd@10-172.24.4.105:22-172.24.4.1:54192.service - OpenSSH per-connection server daemon (172.24.4.1:54192). Jan 30 14:24:50.581124 sshd[4095]: Accepted publickey for core from 172.24.4.1 port 54192 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:24:50.583818 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:24:50.594801 systemd-logind[1442]: New session 13 of user core. Jan 30 14:24:50.603895 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 30 14:24:51.370474 sshd[4097]: Connection closed by 172.24.4.1 port 54192 Jan 30 14:24:51.370270 sshd-session[4095]: pam_unix(sshd:session): session closed for user core Jan 30 14:24:51.375864 systemd[1]: sshd@10-172.24.4.105:22-172.24.4.1:54192.service: Deactivated successfully. Jan 30 14:24:51.377853 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:24:51.380242 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:24:51.381743 systemd-logind[1442]: Removed session 13. Jan 30 14:24:56.401122 systemd[1]: Started sshd@11-172.24.4.105:22-172.24.4.1:50994.service - OpenSSH per-connection server daemon (172.24.4.1:50994). Jan 30 14:24:57.780265 sshd[4110]: Accepted publickey for core from 172.24.4.1 port 50994 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:24:57.783037 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:24:57.794086 systemd-logind[1442]: New session 14 of user core. Jan 30 14:24:57.799951 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:24:58.575394 sshd[4112]: Connection closed by 172.24.4.1 port 50994 Jan 30 14:24:58.576201 sshd-session[4110]: pam_unix(sshd:session): session closed for user core Jan 30 14:24:58.582489 systemd[1]: sshd@11-172.24.4.105:22-172.24.4.1:50994.service: Deactivated successfully. Jan 30 14:24:58.586149 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:24:58.590365 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:24:58.593224 systemd-logind[1442]: Removed session 14. Jan 30 14:25:03.604210 systemd[1]: Started sshd@12-172.24.4.105:22-172.24.4.1:39148.service - OpenSSH per-connection server daemon (172.24.4.1:39148). Jan 30 14:25:04.849275 sshd[4125]: Accepted publickey for core from 172.24.4.1 port 39148 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:25:04.852088 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:25:04.863825 systemd-logind[1442]: New session 15 of user core. Jan 30 14:25:04.875949 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:25:05.614333 sshd[4127]: Connection closed by 172.24.4.1 port 39148 Jan 30 14:25:05.615115 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Jan 30 14:25:05.624697 systemd[1]: sshd@12-172.24.4.105:22-172.24.4.1:39148.service: Deactivated successfully. Jan 30 14:25:05.628006 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:25:05.629904 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:25:05.638250 systemd[1]: Started sshd@13-172.24.4.105:22-172.24.4.1:39164.service - OpenSSH per-connection server daemon (172.24.4.1:39164). Jan 30 14:25:05.640958 systemd-logind[1442]: Removed session 15. Jan 30 14:25:06.979851 sshd[4139]: Accepted publickey for core from 172.24.4.1 port 39164 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:25:06.982473 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:25:06.991999 systemd-logind[1442]: New session 16 of user core. Jan 30 14:25:06.997878 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 30 14:25:07.953288 sshd[4141]: Connection closed by 172.24.4.1 port 39164 Jan 30 14:25:07.953919 sshd-session[4139]: pam_unix(sshd:session): session closed for user core Jan 30 14:25:07.970197 systemd[1]: sshd@13-172.24.4.105:22-172.24.4.1:39164.service: Deactivated successfully. Jan 30 14:25:07.976723 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:25:07.981060 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Jan 30 14:25:07.988182 systemd[1]: Started sshd@14-172.24.4.105:22-172.24.4.1:39170.service - OpenSSH per-connection server daemon (172.24.4.1:39170). Jan 30 14:25:07.991986 systemd-logind[1442]: Removed session 16. Jan 30 14:25:09.328913 sshd[4150]: Accepted publickey for core from 172.24.4.1 port 39170 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:25:09.331782 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:25:09.341271 systemd-logind[1442]: New session 17 of user core. Jan 30 14:25:09.348899 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:25:10.119308 sshd[4152]: Connection closed by 172.24.4.1 port 39170 Jan 30 14:25:10.120531 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Jan 30 14:25:10.127046 systemd[1]: sshd@14-172.24.4.105:22-172.24.4.1:39170.service: Deactivated successfully. Jan 30 14:25:10.132567 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:25:10.136323 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:25:10.139025 systemd-logind[1442]: Removed session 17. Jan 30 14:25:15.149381 systemd[1]: Started sshd@15-172.24.4.105:22-172.24.4.1:50554.service - OpenSSH per-connection server daemon (172.24.4.1:50554). Jan 30 14:25:16.447097 sshd[4163]: Accepted publickey for core from 172.24.4.1 port 50554 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:25:16.449906 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:25:16.460913 systemd-logind[1442]: New session 18 of user core. Jan 30 14:25:16.465870 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 14:25:17.181780 sshd[4165]: Connection closed by 172.24.4.1 port 50554 Jan 30 14:25:17.183525 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Jan 30 14:25:17.192515 systemd[1]: sshd@15-172.24.4.105:22-172.24.4.1:50554.service: Deactivated successfully. Jan 30 14:25:17.195477 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 14:25:17.197686 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Jan 30 14:25:17.204118 systemd[1]: Started sshd@16-172.24.4.105:22-172.24.4.1:50562.service - OpenSSH per-connection server daemon (172.24.4.1:50562). Jan 30 14:25:17.206025 systemd-logind[1442]: Removed session 18. Jan 30 14:25:18.542660 sshd[4178]: Accepted publickey for core from 172.24.4.1 port 50562 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:25:18.545302 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:25:18.556458 systemd-logind[1442]: New session 19 of user core. Jan 30 14:25:18.565004 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 30 14:25:19.431993 sshd[4180]: Connection closed by 172.24.4.1 port 50562 Jan 30 14:25:19.438072 sshd-session[4178]: pam_unix(sshd:session): session closed for user core Jan 30 14:25:19.447196 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Jan 30 14:25:19.447811 systemd[1]: sshd@16-172.24.4.105:22-172.24.4.1:50562.service: Deactivated successfully. Jan 30 14:25:19.451940 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 14:25:19.462211 systemd[1]: Started sshd@17-172.24.4.105:22-172.24.4.1:50566.service - OpenSSH per-connection server daemon (172.24.4.1:50566). Jan 30 14:25:19.465273 systemd-logind[1442]: Removed session 19. Jan 30 14:25:20.762762 sshd[4189]: Accepted publickey for core from 172.24.4.1 port 50566 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:25:20.765466 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:25:20.776161 systemd-logind[1442]: New session 20 of user core. Jan 30 14:25:20.788911 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 14:25:23.181883 sshd[4191]: Connection closed by 172.24.4.1 port 50566 Jan 30 14:25:23.183225 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Jan 30 14:25:23.194832 systemd[1]: sshd@17-172.24.4.105:22-172.24.4.1:50566.service: Deactivated successfully. Jan 30 14:25:23.198379 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 14:25:23.203323 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit. Jan 30 14:25:23.211272 systemd[1]: Started sshd@18-172.24.4.105:22-172.24.4.1:50580.service - OpenSSH per-connection server daemon (172.24.4.1:50580). Jan 30 14:25:23.215623 systemd-logind[1442]: Removed session 20. Jan 30 14:25:24.649518 sshd[4207]: Accepted publickey for core from 172.24.4.1 port 50580 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:25:24.652344 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:25:24.662708 systemd-logind[1442]: New session 21 of user core. Jan 30 14:25:24.671909 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 14:25:25.668300 sshd[4209]: Connection closed by 172.24.4.1 port 50580 Jan 30 14:25:25.670744 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Jan 30 14:25:25.680609 systemd[1]: sshd@18-172.24.4.105:22-172.24.4.1:50580.service: Deactivated successfully. Jan 30 14:25:25.685707 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 14:25:25.687894 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit. Jan 30 14:25:25.698198 systemd[1]: Started sshd@19-172.24.4.105:22-172.24.4.1:59032.service - OpenSSH per-connection server daemon (172.24.4.1:59032). Jan 30 14:25:25.701532 systemd-logind[1442]: Removed session 21. Jan 30 14:25:26.871740 sshd[4218]: Accepted publickey for core from 172.24.4.1 port 59032 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:25:26.874329 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:25:26.885692 systemd-logind[1442]: New session 22 of user core. Jan 30 14:25:26.890873 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 30 14:25:27.579633 sshd[4220]: Connection closed by 172.24.4.1 port 59032 Jan 30 14:25:27.578983 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Jan 30 14:25:27.585190 systemd[1]: sshd@19-172.24.4.105:22-172.24.4.1:59032.service: Deactivated successfully. Jan 30 14:25:27.591267 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 14:25:27.595988 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit. Jan 30 14:25:27.598562 systemd-logind[1442]: Removed session 22. Jan 30 14:25:32.603206 systemd[1]: Started sshd@20-172.24.4.105:22-172.24.4.1:59034.service - OpenSSH per-connection server daemon (172.24.4.1:59034). Jan 30 14:25:33.726729 sshd[4235]: Accepted publickey for core from 172.24.4.1 port 59034 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:25:33.729422 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:25:33.741274 systemd-logind[1442]: New session 23 of user core. Jan 30 14:25:33.748970 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 14:25:34.322214 sshd[4239]: Connection closed by 172.24.4.1 port 59034 Jan 30 14:25:34.322773 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Jan 30 14:25:34.330126 systemd[1]: sshd@20-172.24.4.105:22-172.24.4.1:59034.service: Deactivated successfully. Jan 30 14:25:34.334113 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 14:25:34.337962 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit. Jan 30 14:25:34.340626 systemd-logind[1442]: Removed session 23. Jan 30 14:25:39.349051 systemd[1]: Started sshd@21-172.24.4.105:22-172.24.4.1:41696.service - OpenSSH per-connection server daemon (172.24.4.1:41696). Jan 30 14:25:40.796210 sshd[4249]: Accepted publickey for core from 172.24.4.1 port 41696 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:25:40.798515 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:25:40.807638 systemd-logind[1442]: New session 24 of user core. Jan 30 14:25:40.810823 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 14:25:41.506831 sshd[4252]: Connection closed by 172.24.4.1 port 41696 Jan 30 14:25:41.506962 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Jan 30 14:25:41.518776 systemd[1]: sshd@21-172.24.4.105:22-172.24.4.1:41696.service: Deactivated successfully. Jan 30 14:25:41.522684 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 14:25:41.525182 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit. Jan 30 14:25:41.533223 systemd[1]: Started sshd@22-172.24.4.105:22-172.24.4.1:41712.service - OpenSSH per-connection server daemon (172.24.4.1:41712). Jan 30 14:25:41.536203 systemd-logind[1442]: Removed session 24. Jan 30 14:25:42.695495 sshd[4263]: Accepted publickey for core from 172.24.4.1 port 41712 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:25:42.698445 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:25:42.707692 systemd-logind[1442]: New session 25 of user core. Jan 30 14:25:42.717905 systemd[1]: Started session-25.scope - Session 25 of User core. 
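The long tail of this log is routine SSH churn: sessions 12 through 25 opened and closed from 172.24.4.1, each following the same Accepted publickey → session opened → Connection closed → session closed → Removed session shape. A sketch that pairs open/close lines by client port to measure session length; the two sample lines are copied (key fingerprint abbreviated) from session 12 above.

package main

import (
	"fmt"
	"regexp"
	"time"
)

const stamp = "Jan 2 15:04:05.000000" // journald short timestamp, no year

var (
	opened = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*Accepted publickey .* port (\d+)`)
	closed = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*Connection closed .* port (\d+)`)
)

func main() {
	lines := []string{
		"Jan 30 14:24:43.583162 sshd[4081]: Accepted publickey for core from 172.24.4.1 port 46946 ssh2: RSA SHA256:...",
		"Jan 30 14:24:44.332206 sshd[4083]: Connection closed by 172.24.4.1 port 46946",
	}
	start := map[string]time.Time{} // client port -> accept time
	for _, l := range lines {
		if m := opened.FindStringSubmatch(l); m != nil {
			t, _ := time.Parse(stamp, m[1])
			start[m[2]] = t
		} else if m := closed.FindStringSubmatch(l); m != nil {
			t, _ := time.Parse(stamp, m[1])
			fmt.Printf("port %s: session lasted %v\n", m[2], t.Sub(start[m[2]]))
		}
	}
}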
Jan 30 14:25:44.466296 containerd[1456]: time="2025-01-30T14:25:44.465899767Z" level=info msg="StopContainer for \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\" with timeout 30 (s)" Jan 30 14:25:44.467359 containerd[1456]: time="2025-01-30T14:25:44.466848457Z" level=info msg="Stop container \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\" with signal terminated" Jan 30 14:25:44.474373 containerd[1456]: time="2025-01-30T14:25:44.474296992Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:25:44.481800 systemd[1]: cri-containerd-c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609.scope: Deactivated successfully. Jan 30 14:25:44.482917 containerd[1456]: time="2025-01-30T14:25:44.482760080Z" level=info msg="StopContainer for \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\" with timeout 2 (s)" Jan 30 14:25:44.483234 containerd[1456]: time="2025-01-30T14:25:44.483202631Z" level=info msg="Stop container \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\" with signal terminated" Jan 30 14:25:44.497346 systemd-networkd[1366]: lxc_health: Link DOWN Jan 30 14:25:44.497633 systemd-networkd[1366]: lxc_health: Lost carrier Jan 30 14:25:44.516438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609-rootfs.mount: Deactivated successfully. Jan 30 14:25:44.518910 systemd[1]: cri-containerd-829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125.scope: Deactivated successfully. Jan 30 14:25:44.519096 systemd[1]: cri-containerd-829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125.scope: Consumed 8.239s CPU time. Jan 30 14:25:44.549229 containerd[1456]: time="2025-01-30T14:25:44.548659901Z" level=info msg="shim disconnected" id=c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609 namespace=k8s.io Jan 30 14:25:44.549229 containerd[1456]: time="2025-01-30T14:25:44.548763655Z" level=warning msg="cleaning up after shim disconnected" id=c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609 namespace=k8s.io Jan 30 14:25:44.549229 containerd[1456]: time="2025-01-30T14:25:44.548780237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:25:44.549874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125-rootfs.mount: Deactivated successfully. 
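
Two things happen at once in the entries above: containerd is told to stop both the cilium-operator container (30 s timeout) and the cilium-agent container (2 s timeout, SIGTERM first), and the removal of /etc/cni/net.d/05-cilium.conf makes containerd's CNI reload fail. That missing config is what later drives the node NotReady condition further down. A minimal stdlib sketch of the readiness rule being applied here; the real daemon reacts to inotify events rather than checking on demand:

    import os

    CNI_DIR = "/etc/cni/net.d"

    def cni_config_present(cni_dir=CNI_DIR):
        # containerd treats the CNI plugin as initialized only while a
        # network config (here 05-cilium.conf) exists in /etc/cni/net.d;
        # once the file is removed it reports "no network config found",
        # as logged above.
        try:
            return any(name.endswith((".conf", ".conflist", ".json"))
                       for name in os.listdir(cni_dir))
        except FileNotFoundError:
            return False

    if not cni_config_present():
        print("cni config load failed: no network config found in", CNI_DIR)
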
Jan 30 14:25:44.555800 containerd[1456]: time="2025-01-30T14:25:44.555661456Z" level=info msg="shim disconnected" id=829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125 namespace=k8s.io Jan 30 14:25:44.556087 containerd[1456]: time="2025-01-30T14:25:44.556023907Z" level=warning msg="cleaning up after shim disconnected" id=829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125 namespace=k8s.io Jan 30 14:25:44.556087 containerd[1456]: time="2025-01-30T14:25:44.556040699Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:25:44.568276 containerd[1456]: time="2025-01-30T14:25:44.567970601Z" level=warning msg="cleanup warnings time=\"2025-01-30T14:25:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 14:25:44.581999 containerd[1456]: time="2025-01-30T14:25:44.581946622Z" level=info msg="StopContainer for \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\" returns successfully" Jan 30 14:25:44.582611 containerd[1456]: time="2025-01-30T14:25:44.582560474Z" level=info msg="StopContainer for \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\" returns successfully" Jan 30 14:25:44.582913 containerd[1456]: time="2025-01-30T14:25:44.582798120Z" level=info msg="StopPodSandbox for \"1fa3e7427696bc7f7461d71cc9f005783841e262ce3f6a4b5b683c84fccb590b\"" Jan 30 14:25:44.582913 containerd[1456]: time="2025-01-30T14:25:44.582827535Z" level=info msg="Container to stop \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:25:44.584951 containerd[1456]: time="2025-01-30T14:25:44.582852151Z" level=info msg="StopPodSandbox for \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\"" Jan 30 14:25:44.585141 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fa3e7427696bc7f7461d71cc9f005783841e262ce3f6a4b5b683c84fccb590b-shm.mount: Deactivated successfully. Jan 30 14:25:44.586402 containerd[1456]: time="2025-01-30T14:25:44.584951982Z" level=info msg="Container to stop \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:25:44.586402 containerd[1456]: time="2025-01-30T14:25:44.586340476Z" level=info msg="Container to stop \"fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:25:44.586689 containerd[1456]: time="2025-01-30T14:25:44.586353220Z" level=info msg="Container to stop \"a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:25:44.586689 containerd[1456]: time="2025-01-30T14:25:44.586548567Z" level=info msg="Container to stop \"54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:25:44.586689 containerd[1456]: time="2025-01-30T14:25:44.586561331Z" level=info msg="Container to stop \"b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:25:44.593490 systemd[1]: cri-containerd-1fa3e7427696bc7f7461d71cc9f005783841e262ce3f6a4b5b683c84fccb590b.scope: Deactivated successfully. 
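
Note the ordering constraint containerd enforces while the two pod sandboxes (1fa3e742... for the operator, f3902b0a... for the agent) are dismantled: a container may only be signalled from a running or unknown state, so containers already in CONTAINER_EXITED just get the informational message, and the sandbox itself is stopped and its network torn down only after every container has been handled. A minimal sketch of that rule, not containerd's actual code, with IDs shortened:

    RUNNING, UNKNOWN, EXITED = ("CONTAINER_RUNNING", "CONTAINER_UNKNOWN",
                                "CONTAINER_EXITED")

    def stop_container(cid, state):
        # Exited containers are skipped with the message seen in the log.
        if state not in (RUNNING, UNKNOWN):
            print(f'Container to stop "{cid}" must be in running or unknown '
                  f'state, current state "{state}"')
            return
        # ... deliver SIGTERM, wait out the per-container timeout, SIGKILL ...

    def stop_pod_sandbox(sandbox_id, containers):
        for cid, state in containers.items():
            stop_container(cid, state)      # containers first
        print(f'TearDown network for sandbox "{sandbox_id}"')  # network last

    stop_pod_sandbox("1fa3e742...", {"c469a087...": EXITED})
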
Jan 30 14:25:44.595075 systemd[1]: cri-containerd-f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25.scope: Deactivated successfully. Jan 30 14:25:44.641358 containerd[1456]: time="2025-01-30T14:25:44.641300120Z" level=info msg="shim disconnected" id=1fa3e7427696bc7f7461d71cc9f005783841e262ce3f6a4b5b683c84fccb590b namespace=k8s.io Jan 30 14:25:44.641358 containerd[1456]: time="2025-01-30T14:25:44.641353631Z" level=warning msg="cleaning up after shim disconnected" id=1fa3e7427696bc7f7461d71cc9f005783841e262ce3f6a4b5b683c84fccb590b namespace=k8s.io Jan 30 14:25:44.641358 containerd[1456]: time="2025-01-30T14:25:44.641363038Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:25:44.643121 containerd[1456]: time="2025-01-30T14:25:44.643075511Z" level=info msg="shim disconnected" id=f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25 namespace=k8s.io Jan 30 14:25:44.643257 containerd[1456]: time="2025-01-30T14:25:44.643241293Z" level=warning msg="cleaning up after shim disconnected" id=f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25 namespace=k8s.io Jan 30 14:25:44.643366 containerd[1456]: time="2025-01-30T14:25:44.643305043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:25:44.659149 containerd[1456]: time="2025-01-30T14:25:44.659100307Z" level=info msg="TearDown network for sandbox \"1fa3e7427696bc7f7461d71cc9f005783841e262ce3f6a4b5b683c84fccb590b\" successfully" Jan 30 14:25:44.659149 containerd[1456]: time="2025-01-30T14:25:44.659137747Z" level=info msg="StopPodSandbox for \"1fa3e7427696bc7f7461d71cc9f005783841e262ce3f6a4b5b683c84fccb590b\" returns successfully" Jan 30 14:25:44.663016 containerd[1456]: time="2025-01-30T14:25:44.662981098Z" level=info msg="TearDown network for sandbox \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\" successfully" Jan 30 14:25:44.663016 containerd[1456]: time="2025-01-30T14:25:44.663010994Z" level=info msg="StopPodSandbox for \"f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25\" returns successfully" Jan 30 14:25:44.677656 kubelet[2712]: I0130 14:25:44.677618 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2aeeaad1-925f-4992-ab03-0ac020930fce-cilium-config-path\") pod \"2aeeaad1-925f-4992-ab03-0ac020930fce\" (UID: \"2aeeaad1-925f-4992-ab03-0ac020930fce\") " Jan 30 14:25:44.677656 kubelet[2712]: I0130 14:25:44.677659 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj7ft\" (UniqueName: \"kubernetes.io/projected/2aeeaad1-925f-4992-ab03-0ac020930fce-kube-api-access-hj7ft\") pod \"2aeeaad1-925f-4992-ab03-0ac020930fce\" (UID: \"2aeeaad1-925f-4992-ab03-0ac020930fce\") " Jan 30 14:25:44.678628 kubelet[2712]: I0130 14:25:44.678606 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aeeaad1-925f-4992-ab03-0ac020930fce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2aeeaad1-925f-4992-ab03-0ac020930fce" (UID: "2aeeaad1-925f-4992-ab03-0ac020930fce"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:25:44.679789 kubelet[2712]: I0130 14:25:44.679744 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aeeaad1-925f-4992-ab03-0ac020930fce-kube-api-access-hj7ft" (OuterVolumeSpecName: "kube-api-access-hj7ft") pod "2aeeaad1-925f-4992-ab03-0ac020930fce" (UID: "2aeeaad1-925f-4992-ab03-0ac020930fce"). InnerVolumeSpecName "kube-api-access-hj7ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:25:44.778936 kubelet[2712]: I0130 14:25:44.778337 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-lib-modules\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.778936 kubelet[2712]: I0130 14:25:44.778382 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-clustermesh-secrets\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.778936 kubelet[2712]: I0130 14:25:44.778401 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-etc-cni-netd\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.778936 kubelet[2712]: I0130 14:25:44.778423 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-hubble-tls\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.778936 kubelet[2712]: I0130 14:25:44.778443 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-xtables-lock\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.778936 kubelet[2712]: I0130 14:25:44.778460 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cilium-cgroup\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.779186 kubelet[2712]: I0130 14:25:44.778477 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-host-proc-sys-net\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.779186 kubelet[2712]: I0130 14:25:44.778494 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-host-proc-sys-kernel\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.779186 kubelet[2712]: I0130 14:25:44.778515 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cilium-config-path\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.779186 kubelet[2712]: I0130 14:25:44.778533 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cni-path\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.779186 kubelet[2712]: I0130 14:25:44.778550 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cilium-run\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.779186 kubelet[2712]: I0130 14:25:44.778586 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccnd4\" (UniqueName: \"kubernetes.io/projected/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-kube-api-access-ccnd4\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.779342 kubelet[2712]: I0130 14:25:44.778611 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-hostproc\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.779342 kubelet[2712]: I0130 14:25:44.778627 2712 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-bpf-maps\") pod \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\" (UID: \"af0ddd45-8ee5-4e7d-a546-0b8226ca1f83\") " Jan 30 14:25:44.779342 kubelet[2712]: I0130 14:25:44.778657 2712 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2aeeaad1-925f-4992-ab03-0ac020930fce-cilium-config-path\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.779342 kubelet[2712]: I0130 14:25:44.778669 2712 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hj7ft\" (UniqueName: \"kubernetes.io/projected/2aeeaad1-925f-4992-ab03-0ac020930fce-kube-api-access-hj7ft\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.779342 kubelet[2712]: I0130 14:25:44.778696 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:25:44.779342 kubelet[2712]: I0130 14:25:44.778728 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:25:44.780548 kubelet[2712]: I0130 14:25:44.780529 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:25:44.780759 kubelet[2712]: I0130 14:25:44.780649 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:25:44.780759 kubelet[2712]: I0130 14:25:44.780690 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:25:44.780759 kubelet[2712]: I0130 14:25:44.780707 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:25:44.781049 kubelet[2712]: I0130 14:25:44.780939 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:25:44.781049 kubelet[2712]: I0130 14:25:44.781009 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:25:44.781124 kubelet[2712]: I0130 14:25:44.781046 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cni-path" (OuterVolumeSpecName: "cni-path") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:25:44.782120 kubelet[2712]: I0130 14:25:44.782059 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-hostproc" (OuterVolumeSpecName: "hostproc") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:25:44.783051 kubelet[2712]: I0130 14:25:44.782987 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:25:44.785384 kubelet[2712]: I0130 14:25:44.785321 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:25:44.785473 kubelet[2712]: I0130 14:25:44.785335 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:25:44.787181 kubelet[2712]: I0130 14:25:44.787099 2712 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-kube-api-access-ccnd4" (OuterVolumeSpecName: "kube-api-access-ccnd4") pod "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" (UID: "af0ddd45-8ee5-4e7d-a546-0b8226ca1f83"). InnerVolumeSpecName "kube-api-access-ccnd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:25:44.845167 systemd[1]: Removed slice kubepods-besteffort-pod2aeeaad1_925f_4992_ab03_0ac020930fce.slice - libcontainer container kubepods-besteffort-pod2aeeaad1_925f_4992_ab03_0ac020930fce.slice. Jan 30 14:25:44.849046 systemd[1]: Removed slice kubepods-burstable-podaf0ddd45_8ee5_4e7d_a546_0b8226ca1f83.slice - libcontainer container kubepods-burstable-podaf0ddd45_8ee5_4e7d_a546_0b8226ca1f83.slice. Jan 30 14:25:44.849939 systemd[1]: kubepods-burstable-podaf0ddd45_8ee5_4e7d_a546_0b8226ca1f83.slice: Consumed 8.316s CPU time. 
Jan 30 14:25:44.879522 kubelet[2712]: I0130 14:25:44.879475 2712 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-hubble-tls\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.879522 kubelet[2712]: I0130 14:25:44.879512 2712 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-xtables-lock\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.879522 kubelet[2712]: I0130 14:25:44.879524 2712 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cilium-cgroup\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.879796 kubelet[2712]: I0130 14:25:44.879534 2712 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-host-proc-sys-net\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.879796 kubelet[2712]: I0130 14:25:44.879558 2712 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cilium-config-path\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.879796 kubelet[2712]: I0130 14:25:44.879568 2712 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cni-path\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.879796 kubelet[2712]: I0130 14:25:44.879594 2712 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-host-proc-sys-kernel\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.879796 kubelet[2712]: I0130 14:25:44.879604 2712 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-cilium-run\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.879796 kubelet[2712]: I0130 14:25:44.879613 2712 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ccnd4\" (UniqueName: \"kubernetes.io/projected/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-kube-api-access-ccnd4\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.879796 kubelet[2712]: I0130 14:25:44.879623 2712 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-bpf-maps\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.880178 kubelet[2712]: I0130 14:25:44.879632 2712 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-hostproc\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.880178 kubelet[2712]: I0130 14:25:44.879641 2712 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-clustermesh-secrets\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 
14:25:44.880178 kubelet[2712]: I0130 14:25:44.879649 2712 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-etc-cni-netd\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:44.880178 kubelet[2712]: I0130 14:25:44.879658 2712 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83-lib-modules\") on node \"ci-4186-1-0-5-d272c7c7c0.novalocal\" DevicePath \"\"" Jan 30 14:25:45.368079 kubelet[2712]: I0130 14:25:45.367840 2712 scope.go:117] "RemoveContainer" containerID="829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125" Jan 30 14:25:45.372795 containerd[1456]: time="2025-01-30T14:25:45.372362187Z" level=info msg="RemoveContainer for \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\"" Jan 30 14:25:45.391781 containerd[1456]: time="2025-01-30T14:25:45.391509170Z" level=info msg="RemoveContainer for \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\" returns successfully" Jan 30 14:25:45.392546 kubelet[2712]: I0130 14:25:45.392274 2712 scope.go:117] "RemoveContainer" containerID="a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6" Jan 30 14:25:45.396367 containerd[1456]: time="2025-01-30T14:25:45.395806944Z" level=info msg="RemoveContainer for \"a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6\"" Jan 30 14:25:45.402098 containerd[1456]: time="2025-01-30T14:25:45.401901388Z" level=info msg="RemoveContainer for \"a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6\" returns successfully" Jan 30 14:25:45.402928 kubelet[2712]: I0130 14:25:45.402858 2712 scope.go:117] "RemoveContainer" containerID="fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75" Jan 30 14:25:45.406703 containerd[1456]: time="2025-01-30T14:25:45.406629489Z" level=info msg="RemoveContainer for \"fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75\"" Jan 30 14:25:45.415399 containerd[1456]: time="2025-01-30T14:25:45.415285539Z" level=info msg="RemoveContainer for \"fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75\" returns successfully" Jan 30 14:25:45.415962 kubelet[2712]: I0130 14:25:45.415921 2712 scope.go:117] "RemoveContainer" containerID="54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb" Jan 30 14:25:45.419412 containerd[1456]: time="2025-01-30T14:25:45.419335197Z" level=info msg="RemoveContainer for \"54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb\"" Jan 30 14:25:45.428890 containerd[1456]: time="2025-01-30T14:25:45.428682284Z" level=info msg="RemoveContainer for \"54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb\" returns successfully" Jan 30 14:25:45.429538 kubelet[2712]: I0130 14:25:45.429188 2712 scope.go:117] "RemoveContainer" containerID="b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d" Jan 30 14:25:45.432555 containerd[1456]: time="2025-01-30T14:25:45.431806376Z" level=info msg="RemoveContainer for \"b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d\"" Jan 30 14:25:45.437815 containerd[1456]: time="2025-01-30T14:25:45.437757251Z" level=info msg="RemoveContainer for \"b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d\" returns successfully" Jan 30 14:25:45.438237 kubelet[2712]: I0130 14:25:45.438204 2712 scope.go:117] "RemoveContainer" 
containerID="829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125" Jan 30 14:25:45.438789 containerd[1456]: time="2025-01-30T14:25:45.438643002Z" level=error msg="ContainerStatus for \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\": not found" Jan 30 14:25:45.438948 kubelet[2712]: E0130 14:25:45.438837 2712 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\": not found" containerID="829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125" Jan 30 14:25:45.438948 kubelet[2712]: I0130 14:25:45.438876 2712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125"} err="failed to get container status \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\": rpc error: code = NotFound desc = an error occurred when try to find container \"829cadf6e41d54e3731dabcbf44aab37ac31abb532657c046b02fb472bb20125\": not found" Jan 30 14:25:45.439194 kubelet[2712]: I0130 14:25:45.438957 2712 scope.go:117] "RemoveContainer" containerID="a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6" Jan 30 14:25:45.439194 kubelet[2712]: E0130 14:25:45.439185 2712 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6\": not found" containerID="a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6" Jan 30 14:25:45.439328 containerd[1456]: time="2025-01-30T14:25:45.439093588Z" level=error msg="ContainerStatus for \"a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6\": not found" Jan 30 14:25:45.439404 kubelet[2712]: I0130 14:25:45.439207 2712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6"} err="failed to get container status \"a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"a07411a5ad4f488ab28e8c6d975456f6efa070baaf182b5b759b792dcbe1e7a6\": not found" Jan 30 14:25:45.439404 kubelet[2712]: I0130 14:25:45.439223 2712 scope.go:117] "RemoveContainer" containerID="fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75" Jan 30 14:25:45.439518 containerd[1456]: time="2025-01-30T14:25:45.439443143Z" level=error msg="ContainerStatus for \"fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75\": not found" Jan 30 14:25:45.439659 kubelet[2712]: E0130 14:25:45.439557 2712 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75\": 
not found" containerID="fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75" Jan 30 14:25:45.439659 kubelet[2712]: I0130 14:25:45.439641 2712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75"} err="failed to get container status \"fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe77ad41bff40e88ada6f91b19303ea4c6adcd3d8aed0579529804c6b2bf6b75\": not found" Jan 30 14:25:45.439659 kubelet[2712]: I0130 14:25:45.439660 2712 scope.go:117] "RemoveContainer" containerID="54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb" Jan 30 14:25:45.441892 kubelet[2712]: E0130 14:25:45.440200 2712 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb\": not found" containerID="54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb" Jan 30 14:25:45.441892 kubelet[2712]: I0130 14:25:45.440221 2712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb"} err="failed to get container status \"54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb\": rpc error: code = NotFound desc = an error occurred when try to find container \"54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb\": not found" Jan 30 14:25:45.441892 kubelet[2712]: I0130 14:25:45.440236 2712 scope.go:117] "RemoveContainer" containerID="b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d" Jan 30 14:25:45.442145 containerd[1456]: time="2025-01-30T14:25:45.439961646Z" level=error msg="ContainerStatus for \"54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54ace5db2204e2078c55f07502395374562f6031a6b6d73519315979fcfc8aeb\": not found" Jan 30 14:25:45.442145 containerd[1456]: time="2025-01-30T14:25:45.441742727Z" level=error msg="ContainerStatus for \"b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d\": not found" Jan 30 14:25:45.442319 kubelet[2712]: E0130 14:25:45.441972 2712 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d\": not found" containerID="b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d" Jan 30 14:25:45.442319 kubelet[2712]: I0130 14:25:45.442127 2712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d"} err="failed to get container status \"b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d\": rpc error: code = NotFound desc = an error occurred when try to find container \"b47bbaaa5503931c6eaad431181cb958885bd3e1b962abf8ed769d1d31e6453d\": not found" Jan 30 14:25:45.442319 kubelet[2712]: I0130 14:25:45.442152 2712 scope.go:117] "RemoveContainer" 
containerID="c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609" Jan 30 14:25:45.446113 containerd[1456]: time="2025-01-30T14:25:45.446041463Z" level=info msg="RemoveContainer for \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\"" Jan 30 14:25:45.449640 containerd[1456]: time="2025-01-30T14:25:45.449567399Z" level=info msg="RemoveContainer for \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\" returns successfully" Jan 30 14:25:45.450071 kubelet[2712]: I0130 14:25:45.449804 2712 scope.go:117] "RemoveContainer" containerID="c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609" Jan 30 14:25:45.450402 kubelet[2712]: E0130 14:25:45.450228 2712 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\": not found" containerID="c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609" Jan 30 14:25:45.450402 kubelet[2712]: I0130 14:25:45.450273 2712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609"} err="failed to get container status \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\": rpc error: code = NotFound desc = an error occurred when try to find container \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\": not found" Jan 30 14:25:45.450544 containerd[1456]: time="2025-01-30T14:25:45.450061315Z" level=error msg="ContainerStatus for \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c469a087d2537889d9a773b4ab9ee46ffbb2ae697adb450daad3715440560609\": not found" Jan 30 14:25:45.454544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25-rootfs.mount: Deactivated successfully. Jan 30 14:25:45.454768 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3902b0a05b13ff63047002b6f6a2195cd9e7d2fe3c929f7c068690c77137e25-shm.mount: Deactivated successfully. Jan 30 14:25:45.454937 systemd[1]: var-lib-kubelet-pods-af0ddd45\x2d8ee5\x2d4e7d\x2da546\x2d0b8226ca1f83-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dccnd4.mount: Deactivated successfully. Jan 30 14:25:45.455105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fa3e7427696bc7f7461d71cc9f005783841e262ce3f6a4b5b683c84fccb590b-rootfs.mount: Deactivated successfully. Jan 30 14:25:45.455253 systemd[1]: var-lib-kubelet-pods-2aeeaad1\x2d925f\x2d4992\x2dab03\x2d0ac020930fce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhj7ft.mount: Deactivated successfully. Jan 30 14:25:45.455405 systemd[1]: var-lib-kubelet-pods-af0ddd45\x2d8ee5\x2d4e7d\x2da546\x2d0b8226ca1f83-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 14:25:45.455558 systemd[1]: var-lib-kubelet-pods-af0ddd45\x2d8ee5\x2d4e7d\x2da546\x2d0b8226ca1f83-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 14:25:46.542620 sshd[4265]: Connection closed by 172.24.4.1 port 41712 Jan 30 14:25:46.543614 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Jan 30 14:25:46.554654 systemd[1]: sshd@22-172.24.4.105:22-172.24.4.1:41712.service: Deactivated successfully. 
Jan 30 14:25:46.558965 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 14:25:46.562981 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit. Jan 30 14:25:46.571160 systemd[1]: Started sshd@23-172.24.4.105:22-172.24.4.1:36562.service - OpenSSH per-connection server daemon (172.24.4.1:36562). Jan 30 14:25:46.574055 systemd-logind[1442]: Removed session 25. Jan 30 14:25:46.835261 kubelet[2712]: I0130 14:25:46.835081 2712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aeeaad1-925f-4992-ab03-0ac020930fce" path="/var/lib/kubelet/pods/2aeeaad1-925f-4992-ab03-0ac020930fce/volumes" Jan 30 14:25:46.836903 kubelet[2712]: I0130 14:25:46.836101 2712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" path="/var/lib/kubelet/pods/af0ddd45-8ee5-4e7d-a546-0b8226ca1f83/volumes" Jan 30 14:25:46.954538 kubelet[2712]: E0130 14:25:46.954388 2712 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 14:25:47.852949 sshd[4421]: Accepted publickey for core from 172.24.4.1 port 36562 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:25:47.855661 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:25:47.866172 systemd-logind[1442]: New session 26 of user core. Jan 30 14:25:47.876947 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 14:25:49.194360 kubelet[2712]: I0130 14:25:49.194257 2712 topology_manager.go:215] "Topology Admit Handler" podUID="3af4559b-fdca-4aa3-b569-b24d5d44c571" podNamespace="kube-system" podName="cilium-72f67" Jan 30 14:25:49.194360 kubelet[2712]: E0130 14:25:49.194313 2712 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" containerName="mount-cgroup" Jan 30 14:25:49.194360 kubelet[2712]: E0130 14:25:49.194326 2712 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" containerName="apply-sysctl-overwrites" Jan 30 14:25:49.194360 kubelet[2712]: E0130 14:25:49.194334 2712 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" containerName="mount-bpf-fs" Jan 30 14:25:49.194360 kubelet[2712]: E0130 14:25:49.194341 2712 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" containerName="cilium-agent" Jan 30 14:25:49.194360 kubelet[2712]: E0130 14:25:49.194349 2712 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2aeeaad1-925f-4992-ab03-0ac020930fce" containerName="cilium-operator" Jan 30 14:25:49.194360 kubelet[2712]: E0130 14:25:49.194356 2712 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" containerName="clean-cilium-state" Jan 30 14:25:49.195096 kubelet[2712]: I0130 14:25:49.194381 2712 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aeeaad1-925f-4992-ab03-0ac020930fce" containerName="cilium-operator" Jan 30 14:25:49.195096 kubelet[2712]: I0130 14:25:49.194391 2712 memory_manager.go:354] "RemoveStaleState removing state" podUID="af0ddd45-8ee5-4e7d-a546-0b8226ca1f83" containerName="cilium-agent" Jan 30 14:25:49.204866 systemd[1]: Created slice kubepods-burstable-pod3af4559b_fdca_4aa3_b569_b24d5d44c571.slice - libcontainer container 
kubepods-burstable-pod3af4559b_fdca_4aa3_b569_b24d5d44c571.slice. Jan 30 14:25:49.212282 kubelet[2712]: I0130 14:25:49.212254 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3af4559b-fdca-4aa3-b569-b24d5d44c571-host-proc-sys-net\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.212787 kubelet[2712]: I0130 14:25:49.212463 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3af4559b-fdca-4aa3-b569-b24d5d44c571-hostproc\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.212787 kubelet[2712]: I0130 14:25:49.212493 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3af4559b-fdca-4aa3-b569-b24d5d44c571-xtables-lock\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.212787 kubelet[2712]: I0130 14:25:49.212515 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3af4559b-fdca-4aa3-b569-b24d5d44c571-clustermesh-secrets\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.212787 kubelet[2712]: I0130 14:25:49.212537 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3af4559b-fdca-4aa3-b569-b24d5d44c571-hubble-tls\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.212787 kubelet[2712]: I0130 14:25:49.212557 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3af4559b-fdca-4aa3-b569-b24d5d44c571-etc-cni-netd\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.212787 kubelet[2712]: I0130 14:25:49.212607 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3af4559b-fdca-4aa3-b569-b24d5d44c571-lib-modules\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.213039 kubelet[2712]: I0130 14:25:49.212680 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3af4559b-fdca-4aa3-b569-b24d5d44c571-cni-path\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.213039 kubelet[2712]: I0130 14:25:49.212750 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3af4559b-fdca-4aa3-b569-b24d5d44c571-cilium-config-path\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.213039 kubelet[2712]: I0130 14:25:49.212776 2712 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3af4559b-fdca-4aa3-b569-b24d5d44c571-cilium-ipsec-secrets\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.213039 kubelet[2712]: I0130 14:25:49.212799 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3af4559b-fdca-4aa3-b569-b24d5d44c571-host-proc-sys-kernel\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.213039 kubelet[2712]: I0130 14:25:49.212819 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3af4559b-fdca-4aa3-b569-b24d5d44c571-cilium-cgroup\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.213039 kubelet[2712]: I0130 14:25:49.212838 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3af4559b-fdca-4aa3-b569-b24d5d44c571-bpf-maps\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.213449 kubelet[2712]: I0130 14:25:49.212859 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3af4559b-fdca-4aa3-b569-b24d5d44c571-cilium-run\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.213449 kubelet[2712]: I0130 14:25:49.212878 2712 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrqjl\" (UniqueName: \"kubernetes.io/projected/3af4559b-fdca-4aa3-b569-b24d5d44c571-kube-api-access-lrqjl\") pod \"cilium-72f67\" (UID: \"3af4559b-fdca-4aa3-b569-b24d5d44c571\") " pod="kube-system/cilium-72f67" Jan 30 14:25:49.349652 sshd[4423]: Connection closed by 172.24.4.1 port 36562 Jan 30 14:25:49.350418 sshd-session[4421]: pam_unix(sshd:session): session closed for user core Jan 30 14:25:49.360875 systemd[1]: sshd@23-172.24.4.105:22-172.24.4.1:36562.service: Deactivated successfully. Jan 30 14:25:49.367386 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 14:25:49.371039 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit. Jan 30 14:25:49.384785 systemd[1]: Started sshd@24-172.24.4.105:22-172.24.4.1:36576.service - OpenSSH per-connection server daemon (172.24.4.1:36576). Jan 30 14:25:49.386124 systemd-logind[1442]: Removed session 26. Jan 30 14:25:49.510787 containerd[1456]: time="2025-01-30T14:25:49.510665345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-72f67,Uid:3af4559b-fdca-4aa3-b569-b24d5d44c571,Namespace:kube-system,Attempt:0,}" Jan 30 14:25:49.543505 containerd[1456]: time="2025-01-30T14:25:49.543174998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:25:49.543505 containerd[1456]: time="2025-01-30T14:25:49.543239559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:25:49.543505 containerd[1456]: time="2025-01-30T14:25:49.543258966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:25:49.543505 containerd[1456]: time="2025-01-30T14:25:49.543352020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:25:49.568776 systemd[1]: Started cri-containerd-d70606d1a4fb35d1c552ac8d3401b2baf4ebc3952460df717b19cb7b9fc0ff3c.scope - libcontainer container d70606d1a4fb35d1c552ac8d3401b2baf4ebc3952460df717b19cb7b9fc0ff3c. Jan 30 14:25:49.600706 containerd[1456]: time="2025-01-30T14:25:49.600382028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-72f67,Uid:3af4559b-fdca-4aa3-b569-b24d5d44c571,Namespace:kube-system,Attempt:0,} returns sandbox id \"d70606d1a4fb35d1c552ac8d3401b2baf4ebc3952460df717b19cb7b9fc0ff3c\"" Jan 30 14:25:49.605804 containerd[1456]: time="2025-01-30T14:25:49.605742444Z" level=info msg="CreateContainer within sandbox \"d70606d1a4fb35d1c552ac8d3401b2baf4ebc3952460df717b19cb7b9fc0ff3c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 14:25:49.627205 containerd[1456]: time="2025-01-30T14:25:49.627090017Z" level=info msg="CreateContainer within sandbox \"d70606d1a4fb35d1c552ac8d3401b2baf4ebc3952460df717b19cb7b9fc0ff3c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d6b78554fab6bccb83b6e16a49dce26e189f3063c719195b45991d6f231c5411\"" Jan 30 14:25:49.628849 containerd[1456]: time="2025-01-30T14:25:49.627973243Z" level=info msg="StartContainer for \"d6b78554fab6bccb83b6e16a49dce26e189f3063c719195b45991d6f231c5411\"" Jan 30 14:25:49.664711 systemd[1]: Started cri-containerd-d6b78554fab6bccb83b6e16a49dce26e189f3063c719195b45991d6f231c5411.scope - libcontainer container d6b78554fab6bccb83b6e16a49dce26e189f3063c719195b45991d6f231c5411. Jan 30 14:25:49.697226 containerd[1456]: time="2025-01-30T14:25:49.697180568Z" level=info msg="StartContainer for \"d6b78554fab6bccb83b6e16a49dce26e189f3063c719195b45991d6f231c5411\" returns successfully" Jan 30 14:25:49.701709 systemd[1]: cri-containerd-d6b78554fab6bccb83b6e16a49dce26e189f3063c719195b45991d6f231c5411.scope: Deactivated successfully. 
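
From RunPodSandbox onward the log shows the standard CRI sequence for the replacement cilium-72f67 pod: one sandbox (d70606d1a4fb...), then a CreateContainer/StartContainer pair per step, beginning with the mount-cgroup init container, each of which exits almost immediately (hence the cri-containerd-....scope deactivations). An illustrative stub of that call order, with made-up client names rather than a real CRI library:

    class FakeCRI:
        # Method names mirror the RPCs containerd logs above; a real
        # client would speak the CRI gRPC API instead.
        def RunPodSandbox(self, pod_cfg):
            return "d70606d1a4fb..."        # sandbox id, as logged
        def CreateContainer(self, pod_id, name, pod_cfg):
            return name + "-id"
        def StartContainer(self, ctr_id):
            print("started", ctr_id)

    cri = FakeCRI()
    pod = cri.RunPodSandbox({"name": "cilium-72f67",
                             "namespace": "kube-system"})
    for step in ("mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs",
                 "clean-cilium-state", "cilium-agent"):
        cri.StartContainer(cri.CreateContainer(pod, step, {}))
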
Jan 30 14:25:49.740906 containerd[1456]: time="2025-01-30T14:25:49.740609969Z" level=info msg="shim disconnected" id=d6b78554fab6bccb83b6e16a49dce26e189f3063c719195b45991d6f231c5411 namespace=k8s.io Jan 30 14:25:49.740906 containerd[1456]: time="2025-01-30T14:25:49.740670954Z" level=warning msg="cleaning up after shim disconnected" id=d6b78554fab6bccb83b6e16a49dce26e189f3063c719195b45991d6f231c5411 namespace=k8s.io Jan 30 14:25:49.740906 containerd[1456]: time="2025-01-30T14:25:49.740685341Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:25:49.962110 kubelet[2712]: I0130 14:25:49.960461 2712 setters.go:580] "Node became not ready" node="ci-4186-1-0-5-d272c7c7c0.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T14:25:49Z","lastTransitionTime":"2025-01-30T14:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 14:25:50.403667 containerd[1456]: time="2025-01-30T14:25:50.402759371Z" level=info msg="CreateContainer within sandbox \"d70606d1a4fb35d1c552ac8d3401b2baf4ebc3952460df717b19cb7b9fc0ff3c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 14:25:50.438859 containerd[1456]: time="2025-01-30T14:25:50.438655067Z" level=info msg="CreateContainer within sandbox \"d70606d1a4fb35d1c552ac8d3401b2baf4ebc3952460df717b19cb7b9fc0ff3c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4b672f04130a07a48cb070eacf75baf1dd26827840d5b909371df1872206cf66\"" Jan 30 14:25:50.444388 containerd[1456]: time="2025-01-30T14:25:50.443481682Z" level=info msg="StartContainer for \"4b672f04130a07a48cb070eacf75baf1dd26827840d5b909371df1872206cf66\"" Jan 30 14:25:50.485740 systemd[1]: Started cri-containerd-4b672f04130a07a48cb070eacf75baf1dd26827840d5b909371df1872206cf66.scope - libcontainer container 4b672f04130a07a48cb070eacf75baf1dd26827840d5b909371df1872206cf66. Jan 30 14:25:50.518899 containerd[1456]: time="2025-01-30T14:25:50.518757913Z" level=info msg="StartContainer for \"4b672f04130a07a48cb070eacf75baf1dd26827840d5b909371df1872206cf66\" returns successfully" Jan 30 14:25:50.521469 systemd[1]: cri-containerd-4b672f04130a07a48cb070eacf75baf1dd26827840d5b909371df1872206cf66.scope: Deactivated successfully. Jan 30 14:25:50.545980 containerd[1456]: time="2025-01-30T14:25:50.545913882Z" level=info msg="shim disconnected" id=4b672f04130a07a48cb070eacf75baf1dd26827840d5b909371df1872206cf66 namespace=k8s.io Jan 30 14:25:50.545980 containerd[1456]: time="2025-01-30T14:25:50.545963615Z" level=warning msg="cleaning up after shim disconnected" id=4b672f04130a07a48cb070eacf75baf1dd26827840d5b909371df1872206cf66 namespace=k8s.io Jan 30 14:25:50.545980 containerd[1456]: time="2025-01-30T14:25:50.545973383Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:25:50.713652 sshd[4437]: Accepted publickey for core from 172.24.4.1 port 36576 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA Jan 30 14:25:50.715330 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:25:50.722366 systemd-logind[1442]: New session 27 of user core. Jan 30 14:25:50.732834 systemd[1]: Started session-27.scope - Session 27 of User core. 
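
The setters.go entry a few lines up is where the earlier CNI config removal becomes visible at the API level: kubelet patches its own Node object with a Ready=False condition. The condition is plain JSON and can be pulled apart directly; the text below is copied from that entry:

    import json

    cond = json.loads(
        '{"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2025-01-30T14:25:49Z",'
        '"lastTransitionTime":"2025-01-30T14:25:49Z",'
        '"reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: '
        'NetworkReady=false reason:NetworkPluginNotReady '
        'message:Network plugin returns error: '
        'cni plugin not initialized"}'
    )
    print(cond["type"], cond["status"], cond["reason"])
    # Ready False KubeletNotReady
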
Jan 30 14:25:51.327468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b672f04130a07a48cb070eacf75baf1dd26827840d5b909371df1872206cf66-rootfs.mount: Deactivated successfully. Jan 30 14:25:51.356931 sshd[4599]: Connection closed by 172.24.4.1 port 36576 Jan 30 14:25:51.357886 sshd-session[4437]: pam_unix(sshd:session): session closed for user core Jan 30 14:25:51.369366 systemd[1]: sshd@24-172.24.4.105:22-172.24.4.1:36576.service: Deactivated successfully. Jan 30 14:25:51.374357 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 14:25:51.378241 systemd-logind[1442]: Session 27 logged out. Waiting for processes to exit. Jan 30 14:25:51.388782 systemd[1]: Started sshd@25-172.24.4.105:22-172.24.4.1:36578.service - OpenSSH per-connection server daemon (172.24.4.1:36578). Jan 30 14:25:51.393233 systemd-logind[1442]: Removed session 27. Jan 30 14:25:51.413183 containerd[1456]: time="2025-01-30T14:25:51.413007191Z" level=info msg="CreateContainer within sandbox \"d70606d1a4fb35d1c552ac8d3401b2baf4ebc3952460df717b19cb7b9fc0ff3c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 14:25:51.449109 containerd[1456]: time="2025-01-30T14:25:51.448553932Z" level=info msg="CreateContainer within sandbox \"d70606d1a4fb35d1c552ac8d3401b2baf4ebc3952460df717b19cb7b9fc0ff3c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f87362538aca1d6195e8924602d36e13c6f996ab97553f4ef2901fce3bb93ad6\"" Jan 30 14:25:51.449223 containerd[1456]: time="2025-01-30T14:25:51.449172803Z" level=info msg="StartContainer for \"f87362538aca1d6195e8924602d36e13c6f996ab97553f4ef2901fce3bb93ad6\"" Jan 30 14:25:51.484784 systemd[1]: Started cri-containerd-f87362538aca1d6195e8924602d36e13c6f996ab97553f4ef2901fce3bb93ad6.scope - libcontainer container f87362538aca1d6195e8924602d36e13c6f996ab97553f4ef2901fce3bb93ad6. Jan 30 14:25:51.515040 containerd[1456]: time="2025-01-30T14:25:51.515001949Z" level=info msg="StartContainer for \"f87362538aca1d6195e8924602d36e13c6f996ab97553f4ef2901fce3bb93ad6\" returns successfully" Jan 30 14:25:51.518010 systemd[1]: cri-containerd-f87362538aca1d6195e8924602d36e13c6f996ab97553f4ef2901fce3bb93ad6.scope: Deactivated successfully. Jan 30 14:25:51.538184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f87362538aca1d6195e8924602d36e13c6f996ab97553f4ef2901fce3bb93ad6-rootfs.mount: Deactivated successfully. 
Jan 30 14:25:51.552429 containerd[1456]: time="2025-01-30T14:25:51.552340893Z" level=info msg="shim disconnected" id=f87362538aca1d6195e8924602d36e13c6f996ab97553f4ef2901fce3bb93ad6 namespace=k8s.io
Jan 30 14:25:51.552429 containerd[1456]: time="2025-01-30T14:25:51.552419050Z" level=warning msg="cleaning up after shim disconnected" id=f87362538aca1d6195e8924602d36e13c6f996ab97553f4ef2901fce3bb93ad6 namespace=k8s.io
Jan 30 14:25:51.552429 containerd[1456]: time="2025-01-30T14:25:51.552428928Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:25:51.956769 kubelet[2712]: E0130 14:25:51.956653 2712 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 14:25:52.422280 containerd[1456]: time="2025-01-30T14:25:52.422170181Z" level=info msg="CreateContainer within sandbox \"d70606d1a4fb35d1c552ac8d3401b2baf4ebc3952460df717b19cb7b9fc0ff3c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 14:25:52.521968 containerd[1456]: time="2025-01-30T14:25:52.521751381Z" level=info msg="CreateContainer within sandbox \"d70606d1a4fb35d1c552ac8d3401b2baf4ebc3952460df717b19cb7b9fc0ff3c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f6690e709ea2c3973507a323ef57b3086c8c4bb5c36c1ab9a4f7d0d2c8decf0d\""
Jan 30 14:25:52.525464 containerd[1456]: time="2025-01-30T14:25:52.523862222Z" level=info msg="StartContainer for \"f6690e709ea2c3973507a323ef57b3086c8c4bb5c36c1ab9a4f7d0d2c8decf0d\""
Jan 30 14:25:52.596747 systemd[1]: Started cri-containerd-f6690e709ea2c3973507a323ef57b3086c8c4bb5c36c1ab9a4f7d0d2c8decf0d.scope - libcontainer container f6690e709ea2c3973507a323ef57b3086c8c4bb5c36c1ab9a4f7d0d2c8decf0d.
Jan 30 14:25:52.621297 systemd[1]: cri-containerd-f6690e709ea2c3973507a323ef57b3086c8c4bb5c36c1ab9a4f7d0d2c8decf0d.scope: Deactivated successfully.
Jan 30 14:25:52.626193 containerd[1456]: time="2025-01-30T14:25:52.626161321Z" level=info msg="StartContainer for \"f6690e709ea2c3973507a323ef57b3086c8c4bb5c36c1ab9a4f7d0d2c8decf0d\" returns successfully"
Jan 30 14:25:52.646324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6690e709ea2c3973507a323ef57b3086c8c4bb5c36c1ab9a4f7d0d2c8decf0d-rootfs.mount: Deactivated successfully.
Jan 30 14:25:52.649644 containerd[1456]: time="2025-01-30T14:25:52.649439003Z" level=info msg="shim disconnected" id=f6690e709ea2c3973507a323ef57b3086c8c4bb5c36c1ab9a4f7d0d2c8decf0d namespace=k8s.io
Jan 30 14:25:52.649739 containerd[1456]: time="2025-01-30T14:25:52.649644910Z" level=warning msg="cleaning up after shim disconnected" id=f6690e709ea2c3973507a323ef57b3086c8c4bb5c36c1ab9a4f7d0d2c8decf0d namespace=k8s.io
Jan 30 14:25:52.649739 containerd[1456]: time="2025-01-30T14:25:52.649658465Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:25:52.889235 sshd[4605]: Accepted publickey for core from 172.24.4.1 port 36578 ssh2: RSA SHA256:/YvjBQKFe1oUgIfQk7zjOo1Oyu2zepAeLmF7Obt5akA
Jan 30 14:25:52.890526 sshd-session[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:25:52.902014 systemd-logind[1442]: New session 28 of user core.
Jan 30 14:25:52.907078 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 14:25:53.430823 containerd[1456]: time="2025-01-30T14:25:53.430777937Z" level=info msg="CreateContainer within sandbox \"d70606d1a4fb35d1c552ac8d3401b2baf4ebc3952460df717b19cb7b9fc0ff3c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 14:25:53.468882 containerd[1456]: time="2025-01-30T14:25:53.468815912Z" level=info msg="CreateContainer within sandbox \"d70606d1a4fb35d1c552ac8d3401b2baf4ebc3952460df717b19cb7b9fc0ff3c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fbc30694e9053bd6e30e3c543a420b391fbe02cf7410c709591aa43ef3050cc6\""
Jan 30 14:25:53.469642 containerd[1456]: time="2025-01-30T14:25:53.469559818Z" level=info msg="StartContainer for \"fbc30694e9053bd6e30e3c543a420b391fbe02cf7410c709591aa43ef3050cc6\""
Jan 30 14:25:53.510737 systemd[1]: Started cri-containerd-fbc30694e9053bd6e30e3c543a420b391fbe02cf7410c709591aa43ef3050cc6.scope - libcontainer container fbc30694e9053bd6e30e3c543a420b391fbe02cf7410c709591aa43ef3050cc6.
Jan 30 14:25:53.541373 containerd[1456]: time="2025-01-30T14:25:53.541063540Z" level=info msg="StartContainer for \"fbc30694e9053bd6e30e3c543a420b391fbe02cf7410c709591aa43ef3050cc6\" returns successfully"
Jan 30 14:25:53.566543 systemd[1]: run-containerd-runc-k8s.io-fbc30694e9053bd6e30e3c543a420b391fbe02cf7410c709591aa43ef3050cc6-runc.jU1d22.mount: Deactivated successfully.
Jan 30 14:25:53.885624 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 14:25:53.955713 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jan 30 14:25:54.471225 kubelet[2712]: I0130 14:25:54.471059 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-72f67" podStartSLOduration=5.47098735 podStartE2EDuration="5.47098735s" podCreationTimestamp="2025-01-30 14:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:25:54.470370533 +0000 UTC m=+157.824281251" watchObservedRunningTime="2025-01-30 14:25:54.47098735 +0000 UTC m=+157.824898048"
Jan 30 14:25:57.117941 systemd-networkd[1366]: lxc_health: Link UP
Jan 30 14:25:57.121704 systemd-networkd[1366]: lxc_health: Gained carrier
Jan 30 14:25:57.788440 systemd[1]: run-containerd-runc-k8s.io-fbc30694e9053bd6e30e3c543a420b391fbe02cf7410c709591aa43ef3050cc6-runc.lsE553.mount: Deactivated successfully.
Jan 30 14:25:58.451745 systemd-networkd[1366]: lxc_health: Gained IPv6LL
Jan 30 14:25:59.976498 systemd[1]: run-containerd-runc-k8s.io-fbc30694e9053bd6e30e3c543a420b391fbe02cf7410c709591aa43ef3050cc6-runc.bpUoiW.mount: Deactivated successfully.
Jan 30 14:26:02.173206 systemd[1]: run-containerd-runc-k8s.io-fbc30694e9053bd6e30e3c543a420b391fbe02cf7410c709591aa43ef3050cc6-runc.joZmdz.mount: Deactivated successfully.
Jan 30 14:26:02.568774 sshd[4721]: Connection closed by 172.24.4.1 port 36578
Jan 30 14:26:02.569771 sshd-session[4605]: pam_unix(sshd:session): session closed for user core
Jan 30 14:26:02.579161 systemd[1]: sshd@25-172.24.4.105:22-172.24.4.1:36578.service: Deactivated successfully.
Jan 30 14:26:02.586136 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 14:26:02.590027 systemd-logind[1442]: Session 28 logged out. Waiting for processes to exit.
Jan 30 14:26:02.592681 systemd-logind[1442]: Removed session 28.