Jan 13 20:42:13.052064 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:42:13.052114 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:42:13.052134 kernel: BIOS-provided physical RAM map:
Jan 13 20:42:13.052149 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:42:13.052163 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:42:13.052181 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:42:13.052197 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 13 20:42:13.052212 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 13 20:42:13.052227 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:42:13.052241 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:42:13.052256 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 13 20:42:13.052270 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:42:13.052285 kernel: NX (Execute Disable) protection: active
Jan 13 20:42:13.052300 kernel: APIC: Static calls initialized
Jan 13 20:42:13.053429 kernel: SMBIOS 3.0.0 present.
Jan 13 20:42:13.053459 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 13 20:42:13.053475 kernel: Hypervisor detected: KVM
Jan 13 20:42:13.053491 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:42:13.053506 kernel: kvm-clock: using sched offset of 4730708067 cycles
Jan 13 20:42:13.053529 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:42:13.053545 kernel: tsc: Detected 1996.249 MHz processor
Jan 13 20:42:13.053562 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:42:13.053578 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:42:13.053594 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 13 20:42:13.053611 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:42:13.053627 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:42:13.053642 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 13 20:42:13.053658 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:42:13.053676 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 13 20:42:13.053692 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:42:13.053708 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:42:13.053724 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:42:13.053740 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 13 20:42:13.053755 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:42:13.053771 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:42:13.053786 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 13 20:42:13.053802 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 13 20:42:13.053821 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 13 20:42:13.053837 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 13 20:42:13.053852 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 13 20:42:13.053873 kernel: No NUMA configuration found
Jan 13 20:42:13.053890 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 13 20:42:13.053906 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jan 13 20:42:13.053923 kernel: Zone ranges:
Jan 13 20:42:13.053942 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:42:13.053959 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 20:42:13.053975 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 20:42:13.053991 kernel: Movable zone start for each node
Jan 13 20:42:13.054007 kernel: Early memory node ranges
Jan 13 20:42:13.054023 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:42:13.054039 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 13 20:42:13.054055 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 20:42:13.054076 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 13 20:42:13.054092 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:42:13.054109 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:42:13.054125 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 13 20:42:13.054142 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:42:13.054158 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:42:13.054174 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:42:13.054191 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:42:13.054207 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:42:13.054227 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:42:13.054243 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:42:13.054259 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:42:13.054275 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:42:13.054291 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 20:42:13.054308 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:42:13.055367 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 13 20:42:13.055380 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:42:13.055389 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:42:13.055402 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 20:42:13.055411 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 20:42:13.055420 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 20:42:13.055428 kernel: pcpu-alloc: [0] 0 1
Jan 13 20:42:13.055437 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 13 20:42:13.055448 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:42:13.055457 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:42:13.055466 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:42:13.055477 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:42:13.055485 kernel: Fallback order for Node 0: 0
Jan 13 20:42:13.055494 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 13 20:42:13.055503 kernel: Policy zone: Normal
Jan 13 20:42:13.055511 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:42:13.055520 kernel: software IO TLB: area num 2.
Jan 13 20:42:13.055529 kernel: Memory: 3964156K/4193772K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 229356K reserved, 0K cma-reserved)
Jan 13 20:42:13.055538 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:42:13.055548 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:42:13.055557 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:42:13.055566 kernel: Dynamic Preempt: voluntary
Jan 13 20:42:13.055575 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:42:13.055584 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:42:13.055593 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:42:13.055602 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:42:13.055611 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:42:13.055620 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:42:13.055628 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:42:13.055640 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:42:13.055648 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 20:42:13.055657 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:42:13.055666 kernel: Console: colour VGA+ 80x25
Jan 13 20:42:13.055675 kernel: printk: console [tty0] enabled
Jan 13 20:42:13.055683 kernel: printk: console [ttyS0] enabled
Jan 13 20:42:13.055692 kernel: ACPI: Core revision 20230628
Jan 13 20:42:13.055701 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:42:13.055710 kernel: x2apic enabled
Jan 13 20:42:13.055721 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:42:13.055729 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 20:42:13.055738 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 20:42:13.055747 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 13 20:42:13.055756 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 13 20:42:13.055764 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 13 20:42:13.055773 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:42:13.055782 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:42:13.055791 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:42:13.055801 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:42:13.055810 kernel: Speculative Store Bypass: Vulnerable
Jan 13 20:42:13.055819 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 13 20:42:13.055828 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:42:13.055843 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:42:13.055854 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:42:13.055864 kernel: landlock: Up and running.
Jan 13 20:42:13.055873 kernel: SELinux: Initializing.
Jan 13 20:42:13.055882 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:42:13.055891 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:42:13.055901 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 13 20:42:13.055910 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:42:13.055921 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:42:13.055931 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:42:13.055940 kernel: Performance Events: AMD PMU driver.
Jan 13 20:42:13.055949 kernel: ... version: 0
Jan 13 20:42:13.055960 kernel: ... bit width: 48
Jan 13 20:42:13.055969 kernel: ... generic registers: 4
Jan 13 20:42:13.055978 kernel: ... value mask: 0000ffffffffffff
Jan 13 20:42:13.055987 kernel: ... max period: 00007fffffffffff
Jan 13 20:42:13.055997 kernel: ... fixed-purpose events: 0
Jan 13 20:42:13.056006 kernel: ... event mask: 000000000000000f
Jan 13 20:42:13.056015 kernel: signal: max sigframe size: 1440
Jan 13 20:42:13.056024 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:42:13.056033 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:42:13.056042 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:42:13.056054 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:42:13.056063 kernel: .... node #0, CPUs: #1
Jan 13 20:42:13.056072 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:42:13.056081 kernel: smpboot: Max logical packages: 2
Jan 13 20:42:13.056090 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 13 20:42:13.056100 kernel: devtmpfs: initialized
Jan 13 20:42:13.056109 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:42:13.056118 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:42:13.056127 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:42:13.056138 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:42:13.056147 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:42:13.056156 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:42:13.056166 kernel: audit: type=2000 audit(1736800931.622:1): state=initialized audit_enabled=0 res=1
Jan 13 20:42:13.056175 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:42:13.056184 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:42:13.056193 kernel: cpuidle: using governor menu
Jan 13 20:42:13.056202 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:42:13.056212 kernel: dca service started, version 1.12.1
Jan 13 20:42:13.056223 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:42:13.056233 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:42:13.056242 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:42:13.056251 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:42:13.056260 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:42:13.056269 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:42:13.056278 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:42:13.056287 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:42:13.056296 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:42:13.056307 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:42:13.056317 kernel: ACPI: Interpreter enabled
Jan 13 20:42:13.057365 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:42:13.057376 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:42:13.057385 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:42:13.057395 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:42:13.057404 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 13 20:42:13.057414 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:42:13.057564 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:42:13.057665 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 20:42:13.057756 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 20:42:13.057771 kernel: acpiphp: Slot [3] registered
Jan 13 20:42:13.057781 kernel: acpiphp: Slot [4] registered
Jan 13 20:42:13.057790 kernel: acpiphp: Slot [5] registered
Jan 13 20:42:13.057799 kernel: acpiphp: Slot [6] registered
Jan 13 20:42:13.057809 kernel: acpiphp: Slot [7] registered
Jan 13 20:42:13.057821 kernel: acpiphp: Slot [8] registered
Jan 13 20:42:13.057830 kernel: acpiphp: Slot [9] registered
Jan 13 20:42:13.057839 kernel: acpiphp: Slot [10] registered
Jan 13 20:42:13.057848 kernel: acpiphp: Slot [11] registered
Jan 13 20:42:13.057857 kernel: acpiphp: Slot [12] registered
Jan 13 20:42:13.057866 kernel: acpiphp: Slot [13] registered
Jan 13 20:42:13.057876 kernel: acpiphp: Slot [14] registered
Jan 13 20:42:13.057885 kernel: acpiphp: Slot [15] registered
Jan 13 20:42:13.057894 kernel: acpiphp: Slot [16] registered
Jan 13 20:42:13.057903 kernel: acpiphp: Slot [17] registered
Jan 13 20:42:13.057914 kernel: acpiphp: Slot [18] registered
Jan 13 20:42:13.057924 kernel: acpiphp: Slot [19] registered
Jan 13 20:42:13.057933 kernel: acpiphp: Slot [20] registered
Jan 13 20:42:13.057942 kernel: acpiphp: Slot [21] registered
Jan 13 20:42:13.057951 kernel: acpiphp: Slot [22] registered
Jan 13 20:42:13.057960 kernel: acpiphp: Slot [23] registered
Jan 13 20:42:13.057969 kernel: acpiphp: Slot [24] registered
Jan 13 20:42:13.057978 kernel: acpiphp: Slot [25] registered
Jan 13 20:42:13.057987 kernel: acpiphp: Slot [26] registered
Jan 13 20:42:13.057998 kernel: acpiphp: Slot [27] registered
Jan 13 20:42:13.058007 kernel: acpiphp: Slot [28] registered
Jan 13 20:42:13.058016 kernel: acpiphp: Slot [29] registered
Jan 13 20:42:13.058025 kernel: acpiphp: Slot [30] registered
Jan 13 20:42:13.058034 kernel: acpiphp: Slot [31] registered
Jan 13 20:42:13.058043 kernel: PCI host bridge to bus 0000:00
Jan 13 20:42:13.058138 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:42:13.058222 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:42:13.058308 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:42:13.059086 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 20:42:13.059170 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 13 20:42:13.059250 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:42:13.059391 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 20:42:13.059496 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 20:42:13.059597 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 13 20:42:13.059695 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 13 20:42:13.059813 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 13 20:42:13.059922 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 13 20:42:13.060014 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 13 20:42:13.060104 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 13 20:42:13.060208 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 20:42:13.060305 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 13 20:42:13.060419 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 13 20:42:13.060519 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 13 20:42:13.060611 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 13 20:42:13.060702 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 13 20:42:13.060793 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 13 20:42:13.060884 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 13 20:42:13.060983 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:42:13.061083 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:42:13.061175 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 13 20:42:13.061266 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 13 20:42:13.063398 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 13 20:42:13.063498 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 13 20:42:13.063599 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 13 20:42:13.063704 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 20:42:13.063808 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 13 20:42:13.063906 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 13 20:42:13.064010 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 13 20:42:13.064107 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 13 20:42:13.064203 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 13 20:42:13.065394 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:42:13.065533 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 13 20:42:13.065648 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 13 20:42:13.065756 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 13 20:42:13.065774 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:42:13.065786 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:42:13.065798 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:42:13.065809 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:42:13.065820 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 20:42:13.065836 kernel: iommu: Default domain type: Translated
Jan 13 20:42:13.065848 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:42:13.065859 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:42:13.065870 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:42:13.065881 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:42:13.065892 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 13 20:42:13.065996 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 13 20:42:13.066100 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 13 20:42:13.066214 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:42:13.066237 kernel: vgaarb: loaded
Jan 13 20:42:13.066248 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:42:13.066260 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:42:13.066272 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:42:13.066283 kernel: pnp: PnP ACPI init
Jan 13 20:42:13.067411 kernel: pnp 00:03: [dma 2]
Jan 13 20:42:13.067431 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 20:42:13.067442 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:42:13.067457 kernel: NET: Registered PF_INET protocol family
Jan 13 20:42:13.067468 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:42:13.067479 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:42:13.067489 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:42:13.067500 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:42:13.067510 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:42:13.067521 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:42:13.067531 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:42:13.067542 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:42:13.067554 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:42:13.067565 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:42:13.067653 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:42:13.067738 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:42:13.067822 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:42:13.067906 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 13 20:42:13.067991 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 13 20:42:13.068091 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 13 20:42:13.068193 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 20:42:13.068209 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:42:13.068220 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 20:42:13.068230 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 13 20:42:13.068241 kernel: Initialise system trusted keyrings
Jan 13 20:42:13.068251 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:42:13.068262 kernel: Key type asymmetric registered
Jan 13 20:42:13.068272 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:42:13.068286 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:42:13.068296 kernel: io scheduler mq-deadline registered
Jan 13 20:42:13.068306 kernel: io scheduler kyber registered
Jan 13 20:42:13.068317 kernel: io scheduler bfq registered
Jan 13 20:42:13.068343 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:42:13.068354 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 13 20:42:13.068364 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 20:42:13.068375 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 20:42:13.068385 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 20:42:13.068396 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:42:13.068410 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:42:13.068420 kernel: random: crng init done
Jan 13 20:42:13.068430 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:42:13.068441 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:42:13.068451 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:42:13.068556 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 20:42:13.068647 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 20:42:13.068663 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:42:13.068752 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:42:12 UTC (1736800932)
Jan 13 20:42:13.068839 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 13 20:42:13.068854 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 20:42:13.068865 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:42:13.068875 kernel: Segment Routing with IPv6
Jan 13 20:42:13.068885 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:42:13.068896 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:42:13.068906 kernel: Key type dns_resolver registered
Jan 13 20:42:13.068920 kernel: IPI shorthand broadcast: enabled
Jan 13 20:42:13.068930 kernel: sched_clock: Marking stable (1053008475, 164491066)->(1247230381, -29730840)
Jan 13 20:42:13.068940 kernel: registered taskstats version 1
Jan 13 20:42:13.068950 kernel: Loading compiled-in X.509 certificates
Jan 13 20:42:13.068961 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 13 20:42:13.068971 kernel: Key type .fscrypt registered
Jan 13 20:42:13.068981 kernel: Key type fscrypt-provisioning registered
Jan 13 20:42:13.068992 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:42:13.069002 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:42:13.069015 kernel: ima: No architecture policies found
Jan 13 20:42:13.069025 kernel: clk: Disabling unused clocks
Jan 13 20:42:13.069035 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 13 20:42:13.069046 kernel: Write protecting the kernel read-only data: 38912k
Jan 13 20:42:13.069056 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 13 20:42:13.069067 kernel: Run /init as init process
Jan 13 20:42:13.069077 kernel: with arguments:
Jan 13 20:42:13.069087 kernel: /init
Jan 13 20:42:13.069097 kernel: with environment:
Jan 13 20:42:13.069109 kernel: HOME=/
Jan 13 20:42:13.069119 kernel: TERM=linux
Jan 13 20:42:13.069130 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:42:13.069143 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:42:13.069157 systemd[1]: Detected virtualization kvm.
Jan 13 20:42:13.069169 systemd[1]: Detected architecture x86-64.
Jan 13 20:42:13.069180 systemd[1]: Running in initrd.
Jan 13 20:42:13.069193 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:42:13.069204 systemd[1]: Hostname set to <localhost>.
Jan 13 20:42:13.069215 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:42:13.069226 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:42:13.069238 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:42:13.069249 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:42:13.069261 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:42:13.069286 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:42:13.069301 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:42:13.069312 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:42:13.071386 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:42:13.071402 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:42:13.071413 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:42:13.071429 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:42:13.071440 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:42:13.071450 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:42:13.071460 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:42:13.071471 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:42:13.071481 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:42:13.071491 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:42:13.071502 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:42:13.071514 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:42:13.071524 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:42:13.071535 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:42:13.071545 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:42:13.071556 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:42:13.071566 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:42:13.071576 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:42:13.071587 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:42:13.071598 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:42:13.071610 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:42:13.071620 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:42:13.071631 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:42:13.071641 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:42:13.071652 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:42:13.071687 systemd-journald[184]: Collecting audit messages is disabled.
Jan 13 20:42:13.071716 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:42:13.071731 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:42:13.071743 systemd-journald[184]: Journal started
Jan 13 20:42:13.071766 systemd-journald[184]: Runtime Journal (/run/log/journal/de6c47e6162644f3bea897305b656add) is 8.0M, max 78.3M, 70.3M free.
Jan 13 20:42:13.032400 systemd-modules-load[185]: Inserted module 'overlay'
Jan 13 20:42:13.117287 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:42:13.117313 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:42:13.117348 kernel: Bridge firewalling registered
Jan 13 20:42:13.077150 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 13 20:42:13.118253 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:42:13.119013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:42:13.120148 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:42:13.128455 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:42:13.130146 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:42:13.136435 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:42:13.141421 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:42:13.147595 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:42:13.150572 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:42:13.158170 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:42:13.159571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:42:13.167344 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:42:13.174606 dracut-cmdline[215]: dracut-dracut-053
Jan 13 20:42:13.179602 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:42:13.178478 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:42:13.212294 systemd-resolved[227]: Positive Trust Anchors:
Jan 13 20:42:13.212310 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:42:13.213911 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:42:13.218877 systemd-resolved[227]: Defaulting to hostname 'linux'.
Jan 13 20:42:13.219895 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:42:13.220695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:42:13.259405 kernel: SCSI subsystem initialized
Jan 13 20:42:13.269509 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:42:13.281417 kernel: iscsi: registered transport (tcp)
Jan 13 20:42:13.304848 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:42:13.304975 kernel: QLogic iSCSI HBA Driver
Jan 13 20:42:13.366888 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:42:13.375685 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:42:13.426578 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:42:13.426677 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:42:13.429900 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:42:13.493474 kernel: raid6: sse2x4 gen() 5115 MB/s
Jan 13 20:42:13.510437 kernel: raid6: sse2x2 gen() 5937 MB/s
Jan 13 20:42:13.528998 kernel: raid6: sse2x1 gen() 9364 MB/s
Jan 13 20:42:13.529111 kernel: raid6: using algorithm sse2x1 gen() 9364 MB/s
Jan 13 20:42:13.547928 kernel: raid6: .... xor() 7325 MB/s, rmw enabled
Jan 13 20:42:13.547989 kernel: raid6: using ssse3x2 recovery algorithm
Jan 13 20:42:13.570429 kernel: xor: measuring software checksum speed
Jan 13 20:42:13.570501 kernel: prefetch64-sse : 18503 MB/sec
Jan 13 20:42:13.571425 kernel: generic_sse : 15310 MB/sec
Jan 13 20:42:13.573941 kernel: xor: using function: prefetch64-sse (18503 MB/sec)
Jan 13 20:42:13.748411 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:42:13.767028 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:42:13.776730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:42:13.791747 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Jan 13 20:42:13.796126 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:42:13.807635 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:42:13.834196 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Jan 13 20:42:13.877814 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:42:13.893747 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:42:13.937265 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:42:13.949930 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:42:13.971722 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:42:13.990850 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:42:13.992271 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:42:13.994906 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:42:14.001442 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:42:14.022527 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:42:14.056503 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 13 20:42:14.087872 kernel: libata version 3.00 loaded.
Jan 13 20:42:14.087892 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 13 20:42:14.088003 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 13 20:42:14.088128 kernel: scsi host0: ata_piix
Jan 13 20:42:14.088243 kernel: scsi host1: ata_piix
Jan 13 20:42:14.088380 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 13 20:42:14.088398 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 13 20:42:14.088410 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:42:14.088421 kernel: GPT:17805311 != 20971519
Jan 13 20:42:14.088432 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:42:14.088445 kernel: GPT:17805311 != 20971519
Jan 13 20:42:14.088456 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:42:14.088467 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:42:14.072633 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:42:14.072982 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:42:14.081100 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:42:14.081874 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:42:14.082015 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:42:14.083092 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:42:14.097993 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:42:14.151579 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:42:14.158524 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:42:14.186468 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:42:14.280460 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (454)
Jan 13 20:42:14.289499 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (464)
Jan 13 20:42:14.305746 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:42:14.328559 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:42:14.333289 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:42:14.333884 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:42:14.340722 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:42:14.353499 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:42:14.367078 disk-uuid[514]: Primary Header is updated.
Jan 13 20:42:14.367078 disk-uuid[514]: Secondary Entries is updated.
Jan 13 20:42:14.367078 disk-uuid[514]: Secondary Header is updated.
Jan 13 20:42:14.376542 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:42:15.394425 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:42:15.395958 disk-uuid[515]: The operation has completed successfully.
Jan 13 20:42:15.476484 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:42:15.476671 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:42:15.504468 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:42:15.523938 sh[526]: Success
Jan 13 20:42:15.568442 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 13 20:42:15.646251 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:42:15.648416 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:42:15.649374 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:42:15.679902 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 13 20:42:15.680010 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:42:15.681941 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:42:15.685232 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:42:15.685296 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:42:15.702820 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:42:15.703935 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:42:15.714520 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:42:15.718498 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:42:15.748361 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:42:15.756636 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:42:15.756665 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:42:15.767425 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:42:15.781399 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:42:15.780838 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:42:15.800910 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:42:15.806502 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:42:15.905296 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:42:15.916546 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:42:15.938759 systemd-networkd[713]: lo: Link UP
Jan 13 20:42:15.939507 systemd-networkd[713]: lo: Gained carrier
Jan 13 20:42:15.941462 systemd-networkd[713]: Enumeration completed
Jan 13 20:42:15.941787 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:42:15.942456 systemd-networkd[713]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:42:15.942460 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:42:15.945555 systemd-networkd[713]: eth0: Link UP
Jan 13 20:42:15.945558 systemd-networkd[713]: eth0: Gained carrier
Jan 13 20:42:15.945565 systemd-networkd[713]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:42:15.946524 systemd[1]: Reached target network.target - Network.
Jan 13 20:42:15.956418 systemd-networkd[713]: eth0: DHCPv4 address 172.24.4.153/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 20:42:15.969659 ignition[642]: Ignition 2.20.0
Jan 13 20:42:15.969673 ignition[642]: Stage: fetch-offline
Jan 13 20:42:15.971593 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:42:15.969715 ignition[642]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:42:15.969726 ignition[642]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:42:15.969830 ignition[642]: parsed url from cmdline: ""
Jan 13 20:42:15.969834 ignition[642]: no config URL provided
Jan 13 20:42:15.969839 ignition[642]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:42:15.969848 ignition[642]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:42:15.969853 ignition[642]: failed to fetch config: resource requires networking
Jan 13 20:42:15.970026 ignition[642]: Ignition finished successfully
Jan 13 20:42:15.977532 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:42:15.991577 ignition[722]: Ignition 2.20.0
Jan 13 20:42:15.991590 ignition[722]: Stage: fetch
Jan 13 20:42:15.991775 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:42:15.991786 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:42:15.991907 ignition[722]: parsed url from cmdline: ""
Jan 13 20:42:15.991912 ignition[722]: no config URL provided
Jan 13 20:42:15.991918 ignition[722]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:42:15.991926 ignition[722]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:42:15.992069 ignition[722]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 13 20:42:15.992085 ignition[722]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 13 20:42:15.992179 ignition[722]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 13 20:42:16.268623 ignition[722]: GET result: OK
Jan 13 20:42:16.270033 ignition[722]: parsing config with SHA512: 7bbe9ce750bcfa9d531483d8a44b1de6e2238f9a8762c382a100d3ef3313f4d722cbad5171931826d351eddd5a3de3c4e31b2a3e772cbc0e29bca4d4e7ef7557
Jan 13 20:42:16.283450 unknown[722]: fetched base config from "system"
Jan 13 20:42:16.283480 unknown[722]: fetched base config from "system"
Jan 13 20:42:16.284574 ignition[722]: fetch: fetch complete
Jan 13 20:42:16.283495 unknown[722]: fetched user config from "openstack"
Jan 13 20:42:16.284586 ignition[722]: fetch: fetch passed
Jan 13 20:42:16.288124 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:42:16.284675 ignition[722]: Ignition finished successfully
Jan 13 20:42:16.298718 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:42:16.334128 ignition[728]: Ignition 2.20.0
Jan 13 20:42:16.334158 ignition[728]: Stage: kargs
Jan 13 20:42:16.334690 ignition[728]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:42:16.334718 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:42:16.337319 ignition[728]: kargs: kargs passed
Jan 13 20:42:16.339684 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:42:16.337463 ignition[728]: Ignition finished successfully
Jan 13 20:42:16.350685 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:42:16.388031 ignition[735]: Ignition 2.20.0
Jan 13 20:42:16.388411 ignition[735]: Stage: disks
Jan 13 20:42:16.388829 ignition[735]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:42:16.388856 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:42:16.396442 ignition[735]: disks: disks passed
Jan 13 20:42:16.396623 ignition[735]: Ignition finished successfully
Jan 13 20:42:16.398560 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:42:16.401297 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:42:16.403149 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:42:16.406143 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:42:16.409121 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:42:16.411704 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:42:16.421570 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:42:16.465597 systemd-fsck[744]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:42:16.477601 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:42:16.484564 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:42:16.650390 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 20:42:16.650830 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:42:16.652392 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:42:16.662472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:42:16.666638 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:42:16.670234 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:42:16.675383 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (752)
Jan 13 20:42:16.677469 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 13 20:42:16.694620 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:42:16.694650 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:42:16.694663 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:42:16.691912 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:42:16.691948 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:42:16.697979 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:42:16.703421 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:42:16.705599 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:42:16.720158 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:42:16.848979 initrd-setup-root[780]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:42:16.867206 initrd-setup-root[787]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:42:16.875134 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:42:16.884551 initrd-setup-root[801]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:42:16.996476 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:42:17.002499 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:42:17.004730 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:42:17.013716 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:42:17.013582 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:42:17.045373 ignition[869]: INFO : Ignition 2.20.0
Jan 13 20:42:17.045373 ignition[869]: INFO : Stage: mount
Jan 13 20:42:17.045373 ignition[869]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:42:17.045373 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:42:17.053456 ignition[869]: INFO : mount: mount passed
Jan 13 20:42:17.053456 ignition[869]: INFO : Ignition finished successfully
Jan 13 20:42:17.050551 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:42:17.064129 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:42:17.713660 systemd-networkd[713]: eth0: Gained IPv6LL
Jan 13 20:42:23.949095 coreos-metadata[754]: Jan 13 20:42:23.948 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 20:42:23.989476 coreos-metadata[754]: Jan 13 20:42:23.989 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 20:42:24.002435 coreos-metadata[754]: Jan 13 20:42:24.002 INFO Fetch successful
Jan 13 20:42:24.004024 coreos-metadata[754]: Jan 13 20:42:24.002 INFO wrote hostname ci-4186-1-0-b-778e6b4119.novalocal to /sysroot/etc/hostname
Jan 13 20:42:24.007687 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 13 20:42:24.007932 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 13 20:42:24.020618 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:42:24.054983 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:42:24.073475 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (885)
Jan 13 20:42:24.075376 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:42:24.080781 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:42:24.084951 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:42:24.096652 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:42:24.101016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:42:24.148914 ignition[903]: INFO : Ignition 2.20.0
Jan 13 20:42:24.148914 ignition[903]: INFO : Stage: files
Jan 13 20:42:24.151873 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:42:24.151873 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:42:24.151873 ignition[903]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:42:24.165111 ignition[903]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:42:24.165111 ignition[903]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:42:24.226274 ignition[903]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:42:24.228487 ignition[903]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:42:24.228487 ignition[903]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:42:24.228397 unknown[903]: wrote ssh authorized keys file for user: core
Jan 13 20:42:24.259171 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:42:24.261793 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 20:42:24.326995 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:42:24.625037 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:42:24.625037 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:42:24.625037 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 13 20:42:25.238506 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:42:25.865490 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:42:25.865490 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:42:25.865490 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:42:25.865490 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:42:25.865490 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:42:25.865490 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:42:25.878383 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:42:25.878383 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:42:25.878383 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:42:25.878383 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:42:25.878383 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:42:25.878383 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 20:42:25.878383 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 20:42:25.878383 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 20:42:25.878383 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 13 20:42:26.391205 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 20:42:27.972355 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 20:42:27.973897 ignition[903]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 20:42:27.975579 ignition[903]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:42:27.975579 ignition[903]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:42:27.975579 ignition[903]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 20:42:27.975579 ignition[903]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:42:27.975579 ignition[903]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:42:27.987154 ignition[903]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:42:27.987154 ignition[903]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:42:27.987154 ignition[903]: INFO : files: files passed
Jan 13 20:42:27.987154 ignition[903]: INFO : Ignition finished successfully
Jan 13 20:42:27.980103 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:42:27.989572 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:42:27.992468 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:42:28.010385 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:42:28.011055 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:42:28.016988 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:42:28.016988 initrd-setup-root-after-ignition[932]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:42:28.019708 initrd-setup-root-after-ignition[936]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:42:28.021707 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:42:28.024507 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:42:28.039587 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:42:28.064677 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:42:28.064774 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:42:28.066847 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:42:28.068555 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:42:28.070423 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:42:28.080569 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:42:28.094012 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:42:28.099577 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:42:28.114381 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:42:28.115034 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:42:28.115732 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:42:28.117626 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:42:28.117746 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:42:28.120153 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:42:28.121121 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:42:28.122701 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:42:28.124763 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:42:28.126368 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:42:28.127983 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:42:28.129829 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:42:28.131845 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:42:28.133782 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:42:28.135613 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:42:28.137447 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:42:28.137556 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:42:28.140420 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:42:28.141531 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:42:28.143278 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:42:28.145423 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 13 20:42:28.146073 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:42:28.146223 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:42:28.148242 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:42:28.148381 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:42:28.149047 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:42:28.149153 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:42:28.159802 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:42:28.160365 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:42:28.160541 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:42:28.162551 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:42:28.165831 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:42:28.166431 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:42:28.167627 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:42:28.167819 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:42:28.174904 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:42:28.175001 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:42:28.183345 ignition[956]: INFO : Ignition 2.20.0 Jan 13 20:42:28.183345 ignition[956]: INFO : Stage: umount Jan 13 20:42:28.183345 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:42:28.183345 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:42:28.189502 ignition[956]: INFO : umount: umount passed Jan 13 20:42:28.189502 ignition[956]: INFO : Ignition finished successfully Jan 13 20:42:28.189557 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:42:28.189648 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:42:28.190952 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:42:28.191028 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:42:28.192964 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:42:28.193004 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:42:28.194009 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:42:28.194048 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:42:28.196640 systemd[1]: Stopped target network.target - Network. Jan 13 20:42:28.197561 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:42:28.197604 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:42:28.198606 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:42:28.199605 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:42:28.204551 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:42:28.206674 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:42:28.207923 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:42:28.208879 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:42:28.208916 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 13 20:42:28.209804 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:42:28.209835 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:42:28.210772 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:42:28.210821 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:42:28.211766 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:42:28.211804 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:42:28.212790 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:42:28.214299 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:42:28.216242 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:42:28.216813 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:42:28.216894 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:42:28.217426 systemd-networkd[713]: eth0: DHCPv6 lease lost Jan 13 20:42:28.217868 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:42:28.217925 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:42:28.221820 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:42:28.221918 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:42:28.223275 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:42:28.223512 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:42:28.231489 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:42:28.232525 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:42:28.232612 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:42:28.233970 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:42:28.234959 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:42:28.235073 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:42:28.239299 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:42:28.239412 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:42:28.241147 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:42:28.241861 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:42:28.243197 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:42:28.243951 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:42:28.250499 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:42:28.251310 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:42:28.252223 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:42:28.252312 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:42:28.253768 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:42:28.253818 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:42:28.254959 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:42:28.254992 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 20:42:28.256129 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:42:28.256171 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:42:28.257771 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:42:28.257810 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:42:28.258949 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:42:28.258994 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:42:28.264491 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:42:28.265068 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:42:28.265119 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:42:28.268017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:42:28.268071 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:42:28.271680 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:42:28.271767 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:42:28.273213 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:42:28.281481 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:42:28.288870 systemd[1]: Switching root. Jan 13 20:42:28.325215 systemd-journald[184]: Journal stopped Jan 13 20:42:29.900432 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 13 20:42:29.900512 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:42:29.900538 kernel: SELinux: policy capability open_perms=1 Jan 13 20:42:29.900558 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:42:29.900577 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:42:29.900596 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:42:29.900613 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:42:29.900627 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:42:29.900646 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:42:29.900661 kernel: audit: type=1403 audit(1736800948.878:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:42:29.900684 systemd[1]: Successfully loaded SELinux policy in 65.821ms. Jan 13 20:42:29.900711 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.436ms. Jan 13 20:42:29.900727 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:42:29.900743 systemd[1]: Detected virtualization kvm. Jan 13 20:42:29.900759 systemd[1]: Detected architecture x86-64. Jan 13 20:42:29.900773 systemd[1]: Detected first boot. Jan 13 20:42:29.900789 systemd[1]: Hostname set to <ci-4186-1-0-b-778e6b4119.novalocal>. Jan 13 20:42:29.900804 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:42:29.900821 zram_generator::config[1000]: No configuration found. Jan 13 20:42:29.900839 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:42:29.900853 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:42:29.900873 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:42:29.900889 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:42:29.900905 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:42:29.900921 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:42:29.900936 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:42:29.900953 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:42:29.900969 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:42:29.900985 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:42:29.901000 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:42:29.901015 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:42:29.901030 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:42:29.901046 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:42:29.901061 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:42:29.901076 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:42:29.901095 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:42:29.901110 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:42:29.901125 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:42:29.901140 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:42:29.901155 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:42:29.901171 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:42:29.901189 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:42:29.901205 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:42:29.901220 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:42:29.901240 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:42:29.901259 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:42:29.901274 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:42:29.901289 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:42:29.901304 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:42:29.901319 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:42:29.905457 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:42:29.905476 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:42:29.905492 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:42:29.905507 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:42:29.905523 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jan 13 20:42:29.905538 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:42:29.905554 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:42:29.905569 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:42:29.905583 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:42:29.905602 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:42:29.905618 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:42:29.905634 systemd[1]: Reached target machines.target - Containers. Jan 13 20:42:29.905649 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:42:29.905665 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:42:29.905681 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:42:29.905696 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:42:29.905711 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:42:29.905726 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:42:29.905743 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:42:29.905759 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:42:29.905774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:42:29.905790 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:42:29.905805 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:42:29.905820 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:42:29.905835 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:42:29.905849 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:42:29.905867 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:42:29.905882 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:42:29.905897 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:42:29.905912 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:42:29.905927 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:42:29.905942 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:42:29.905957 systemd[1]: Stopped verity-setup.service. Jan 13 20:42:29.905972 kernel: ACPI: bus type drm_connector registered Jan 13 20:42:29.905987 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:42:29.906004 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:42:29.906020 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:42:29.906034 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 13 20:42:29.906049 kernel: loop: module loaded Jan 13 20:42:29.906064 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:42:29.906102 systemd-journald[1096]: Collecting audit messages is disabled. Jan 13 20:42:29.906132 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:42:29.906148 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:42:29.906163 kernel: fuse: init (API version 7.39) Jan 13 20:42:29.906177 systemd-journald[1096]: Journal started Jan 13 20:42:29.906213 systemd-journald[1096]: Runtime Journal (/run/log/journal/de6c47e6162644f3bea897305b656add) is 8.0M, max 78.3M, 70.3M free. Jan 13 20:42:29.533955 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:42:29.556971 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:42:29.909528 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:42:29.557342 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:42:29.910800 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:42:29.911761 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:42:29.912733 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:42:29.912915 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:42:29.913781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:42:29.913912 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:42:29.914708 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:42:29.914902 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:42:29.915735 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:42:29.915881 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:42:29.916628 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:42:29.916744 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:42:29.917504 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:42:29.917641 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:42:29.918676 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:42:29.919614 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:42:29.920374 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:42:29.931235 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:42:29.938028 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:42:29.948456 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:42:29.949049 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:42:29.949095 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:42:29.953766 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:42:29.958544 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:42:29.960457 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 13 20:42:29.961592 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:42:29.970282 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:42:29.974196 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:42:29.975722 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:42:29.981565 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:42:29.982667 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:42:29.984936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:42:29.994536 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:42:30.003266 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:42:30.010931 systemd-journald[1096]: Time spent on flushing to /var/log/journal/de6c47e6162644f3bea897305b656add is 49.442ms for 944 entries. Jan 13 20:42:30.010931 systemd-journald[1096]: System Journal (/var/log/journal/de6c47e6162644f3bea897305b656add) is 8.0M, max 584.8M, 576.8M free. Jan 13 20:42:30.096484 systemd-journald[1096]: Received client request to flush runtime journal. Jan 13 20:42:30.096544 kernel: loop0: detected capacity change from 0 to 138184 Jan 13 20:42:30.013627 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:42:30.014593 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:42:30.015173 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:42:30.015947 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:42:30.016728 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:42:30.021099 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:42:30.034560 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:42:30.037499 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:42:30.076889 udevadm[1142]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:42:30.098099 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:42:30.100023 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:42:30.138393 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:42:30.139749 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:42:30.169390 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:42:30.174041 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:42:30.180655 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:42:30.193740 kernel: loop1: detected capacity change from 0 to 141000 Jan 13 20:42:30.233480 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. 
Jan 13 20:42:30.233499 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Jan 13 20:42:30.238877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:42:30.255354 kernel: loop2: detected capacity change from 0 to 8 Jan 13 20:42:30.274409 kernel: loop3: detected capacity change from 0 to 210664 Jan 13 20:42:30.345468 kernel: loop4: detected capacity change from 0 to 138184 Jan 13 20:42:30.432355 kernel: loop5: detected capacity change from 0 to 141000 Jan 13 20:42:30.499384 kernel: loop6: detected capacity change from 0 to 8 Jan 13 20:42:30.502359 kernel: loop7: detected capacity change from 0 to 210664 Jan 13 20:42:30.533137 (sd-merge)[1159]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 13 20:42:30.533672 (sd-merge)[1159]: Merged extensions into '/usr'. Jan 13 20:42:30.544595 systemd[1]: Reloading requested from client PID 1133 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:42:30.544617 systemd[1]: Reloading... Jan 13 20:42:30.617403 zram_generator::config[1181]: No configuration found. Jan 13 20:42:30.859541 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:42:30.930962 systemd[1]: Reloading finished in 385 ms. Jan 13 20:42:30.955602 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:42:30.956587 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:42:30.965535 systemd[1]: Starting ensure-sysext.service... Jan 13 20:42:30.967545 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:42:30.974611 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:42:30.986391 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:42:30.987445 ldconfig[1128]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:42:30.986412 systemd[1]: Reloading... Jan 13 20:42:31.013821 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:42:31.014581 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:42:31.017660 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:42:31.018019 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 13 20:42:31.018146 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 13 20:42:31.020528 systemd-udevd[1243]: Using default interface naming scheme 'v255'. Jan 13 20:42:31.024180 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:42:31.024190 systemd-tmpfiles[1242]: Skipping /boot Jan 13 20:42:31.041449 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:42:31.041464 systemd-tmpfiles[1242]: Skipping /boot Jan 13 20:42:31.073353 zram_generator::config[1266]: No configuration found. 
Jan 13 20:42:31.221407 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1281) Jan 13 20:42:31.238351 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 13 20:42:31.248459 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 20:42:31.296410 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:42:31.321345 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 20:42:31.337355 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:42:31.348856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:42:31.371339 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 13 20:42:31.371408 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 13 20:42:31.377336 kernel: Console: switching to colour dummy device 80x25 Jan 13 20:42:31.377374 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 13 20:42:31.377393 kernel: [drm] features: -context_init Jan 13 20:42:31.381354 kernel: [drm] number of scanouts: 1 Jan 13 20:42:31.381390 kernel: [drm] number of cap sets: 0 Jan 13 20:42:31.384355 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 13 20:42:31.389349 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 13 20:42:31.389393 kernel: Console: switching to colour frame buffer device 160x50 Jan 13 20:42:31.399359 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 13 20:42:31.428741 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:42:31.429031 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:42:31.429546 systemd[1]: Reloading finished in 442 ms. Jan 13 20:42:31.446102 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:42:31.448624 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:42:31.456759 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:42:31.483285 systemd[1]: Finished ensure-sysext.service. Jan 13 20:42:31.518928 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:42:31.523628 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:42:31.528478 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:42:31.541666 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:42:31.542096 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:42:31.546012 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:42:31.551009 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:42:31.558705 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:42:31.572662 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:42:31.581641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 13 20:42:31.582110 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:42:31.584788 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:42:31.589633 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:42:31.600597 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:42:31.612227 lvm[1362]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:42:31.615568 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:42:31.626928 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:42:31.634583 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:42:31.636468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:42:31.638210 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:42:31.638980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:42:31.640407 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:42:31.640837 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:42:31.641112 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:42:31.643643 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:42:31.643773 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:42:31.651380 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:42:31.666135 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:42:31.666393 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:42:31.671385 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:42:31.678065 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:42:31.690813 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:42:31.693077 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:42:31.693231 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:42:31.698214 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:42:31.699074 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:42:31.704639 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:42:31.726485 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:42:31.733711 augenrules[1405]: No rules Jan 13 20:42:31.737556 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:42:31.740870 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:42:31.741202 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 13 20:42:31.745771 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:42:31.763725 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:42:31.775726 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:42:31.804778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:42:31.826157 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:42:31.828742 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:42:31.870356 systemd-resolved[1378]: Positive Trust Anchors: Jan 13 20:42:31.870374 systemd-resolved[1378]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:42:31.870417 systemd-resolved[1378]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:42:31.875268 systemd-resolved[1378]: Using system hostname 'ci-4186-1-0-b-778e6b4119.novalocal'. Jan 13 20:42:31.876752 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:42:31.878695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:42:31.888991 systemd-networkd[1376]: lo: Link UP Jan 13 20:42:31.889005 systemd-networkd[1376]: lo: Gained carrier Jan 13 20:42:31.890239 systemd-networkd[1376]: Enumeration completed Jan 13 20:42:31.890382 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:42:31.892113 systemd[1]: Reached target network.target - Network. Jan 13 20:42:31.892569 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:42:31.892573 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:42:31.893220 systemd-networkd[1376]: eth0: Link UP Jan 13 20:42:31.893224 systemd-networkd[1376]: eth0: Gained carrier Jan 13 20:42:31.893238 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:42:31.909538 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:42:31.910204 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:42:31.910745 systemd-networkd[1376]: eth0: DHCPv4 address 172.24.4.153/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 13 20:42:31.910869 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:42:31.913206 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Jan 13 20:42:31.913828 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 13 20:42:31.914418 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:42:31.914929 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:42:31.918728 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:42:31.918776 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:42:31.920978 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:42:31.923406 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:42:31.925957 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:42:31.928047 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:42:31.934466 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:42:31.937727 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:42:31.948277 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:42:31.950959 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:42:31.953202 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:42:31.953250 systemd-timesyncd[1379]: Contacted time server 194.57.169.1:123 (0.flatcar.pool.ntp.org). Jan 13 20:42:31.953309 systemd-timesyncd[1379]: Initial clock synchronization to Mon 2025-01-13 20:42:32.032615 UTC. Jan 13 20:42:31.953931 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:42:31.956055 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:42:31.956088 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:42:31.962500 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:42:31.969978 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:42:31.973843 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:42:31.983501 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:42:31.992431 jq[1434]: false Jan 13 20:42:31.995730 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:42:31.998466 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:42:32.009531 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:42:32.016468 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:42:32.026538 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 13 20:42:32.033465 extend-filesystems[1437]: Found loop4 Jan 13 20:42:32.040327 extend-filesystems[1437]: Found loop5 Jan 13 20:42:32.040327 extend-filesystems[1437]: Found loop6 Jan 13 20:42:32.040327 extend-filesystems[1437]: Found loop7 Jan 13 20:42:32.040327 extend-filesystems[1437]: Found vda Jan 13 20:42:32.040327 extend-filesystems[1437]: Found vda1 Jan 13 20:42:32.040327 extend-filesystems[1437]: Found vda2 Jan 13 20:42:32.040327 extend-filesystems[1437]: Found vda3 Jan 13 20:42:32.040327 extend-filesystems[1437]: Found usr Jan 13 20:42:32.040327 extend-filesystems[1437]: Found vda4 Jan 13 20:42:32.040327 extend-filesystems[1437]: Found vda6 Jan 13 20:42:32.040327 extend-filesystems[1437]: Found vda7 Jan 13 20:42:32.040327 extend-filesystems[1437]: Found vda9 Jan 13 20:42:32.040327 extend-filesystems[1437]: Checking size of /dev/vda9 Jan 13 20:42:32.162835 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 13 20:42:32.162866 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 13 20:42:32.162886 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1269) Jan 13 20:42:32.035490 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:42:32.163007 extend-filesystems[1437]: Resized partition /dev/vda9 Jan 13 20:42:32.056322 dbus-daemon[1433]: [system] SELinux support is enabled Jan 13 20:42:32.053402 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:42:32.191956 extend-filesystems[1455]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:42:32.191956 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:42:32.191956 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:42:32.191956 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 13 20:42:32.066897 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:42:32.244886 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Jan 13 20:42:32.069630 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:42:32.249743 jq[1456]: true Jan 13 20:42:32.073506 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:42:32.250100 update_engine[1452]: I20250113 20:42:32.173952 1452 main.cc:92] Flatcar Update Engine starting Jan 13 20:42:32.250100 update_engine[1452]: I20250113 20:42:32.189908 1452 update_check_scheduler.cc:74] Next update check in 5m24s Jan 13 20:42:32.091506 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:42:32.112362 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:42:32.254676 jq[1463]: true Jan 13 20:42:32.130258 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:42:32.130580 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:42:32.130878 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:42:32.131023 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:42:32.141951 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:42:32.142359 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 13 20:42:32.149953 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:42:32.150157 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:42:32.186869 systemd-logind[1450]: New seat seat0. Jan 13 20:42:32.192425 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:42:32.213806 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:42:32.220882 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:42:32.220913 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:42:32.225940 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 20:42:32.225960 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:42:32.239877 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:42:32.239900 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:42:32.251556 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:42:32.255173 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:42:32.276751 tar[1461]: linux-amd64/helm Jan 13 20:42:32.349801 bash[1491]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:42:32.351114 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:42:32.364848 systemd[1]: Starting sshkeys.service... Jan 13 20:42:32.408816 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:42:32.424003 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:42:32.520263 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:42:32.622276 containerd[1466]: time="2025-01-13T20:42:32.622164284Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:42:32.666422 containerd[1466]: time="2025-01-13T20:42:32.666123162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:42:32.675945 containerd[1466]: time="2025-01-13T20:42:32.675815386Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:42:32.675945 containerd[1466]: time="2025-01-13T20:42:32.675852146Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:42:32.675945 containerd[1466]: time="2025-01-13T20:42:32.675872919Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:42:32.677246 containerd[1466]: time="2025-01-13T20:42:32.676209910Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 13 20:42:32.677246 containerd[1466]: time="2025-01-13T20:42:32.676282975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:42:32.677246 containerd[1466]: time="2025-01-13T20:42:32.676380465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:42:32.677246 containerd[1466]: time="2025-01-13T20:42:32.676397585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:42:32.677246 containerd[1466]: time="2025-01-13T20:42:32.676576690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:42:32.677246 containerd[1466]: time="2025-01-13T20:42:32.676595571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:42:32.677246 containerd[1466]: time="2025-01-13T20:42:32.676612357Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:42:32.677246 containerd[1466]: time="2025-01-13T20:42:32.676624631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:42:32.677246 containerd[1466]: time="2025-01-13T20:42:32.676715898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:42:32.677246 containerd[1466]: time="2025-01-13T20:42:32.676932765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:42:32.677246 containerd[1466]: time="2025-01-13T20:42:32.677052414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:42:32.677539 containerd[1466]: time="2025-01-13T20:42:32.677069898Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:42:32.677539 containerd[1466]: time="2025-01-13T20:42:32.677159486Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:42:32.677539 containerd[1466]: time="2025-01-13T20:42:32.677222513Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:42:32.685072 containerd[1466]: time="2025-01-13T20:42:32.685037695Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:42:32.685248 containerd[1466]: time="2025-01-13T20:42:32.685229862Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:42:32.685409 containerd[1466]: time="2025-01-13T20:42:32.685391664Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:42:32.685494 containerd[1466]: time="2025-01-13T20:42:32.685477539Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 13 20:42:32.685572 containerd[1466]: time="2025-01-13T20:42:32.685556554Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:42:32.685790 containerd[1466]: time="2025-01-13T20:42:32.685739027Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686117170Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686242192Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686261974Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686279984Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686297195Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686314447Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686354061Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686374732Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686395657Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686410925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686426467Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686439925Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686463146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.686775 containerd[1466]: time="2025-01-13T20:42:32.686479862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686496365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686512625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686536009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686554343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686568600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686583888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686599410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686617512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686632062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686647199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686661890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686680134Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686703345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686717339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.687075 containerd[1466]: time="2025-01-13T20:42:32.686729288Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:42:32.687437 containerd[1466]: time="2025-01-13T20:42:32.687419562Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:42:32.687564 containerd[1466]: time="2025-01-13T20:42:32.687545161Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:42:32.688353 containerd[1466]: time="2025-01-13T20:42:32.687609068Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:42:32.688353 containerd[1466]: time="2025-01-13T20:42:32.687629265Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:42:32.688353 containerd[1466]: time="2025-01-13T20:42:32.687641579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.688353 containerd[1466]: time="2025-01-13T20:42:32.687659791Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:42:32.688353 containerd[1466]: time="2025-01-13T20:42:32.687671619Z" level=info msg="NRI interface is disabled by configuration." 
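The plugin-probe entries above show containerd testing each snapshotter and skipping any whose prerequisites are missing: no aufs kernel module, no scratch-file generator for blockfile, /var/lib/containerd not on btrfs or zfs, devmapper unconfigured. That leaves overlayfs as the effective default. One way to confirm which plugins actually loaded on a host like this one, a sketch using the ctr CLI that ships alongside containerd:

    # list all containerd plugins with their load status (ok / skip / error)
    ctr plugins ls
    # narrow the output to snapshotters only
    ctr plugins ls | grep snapshotter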
Jan 13 20:42:32.688353 containerd[1466]: time="2025-01-13T20:42:32.687685968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:42:32.688507 containerd[1466]: time="2025-01-13T20:42:32.687990377Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:42:32.688507 containerd[1466]: time="2025-01-13T20:42:32.688047425Z" level=info msg="Connect containerd service" Jan 13 20:42:32.688507 containerd[1466]: time="2025-01-13T20:42:32.688075958Z" level=info msg="using legacy CRI server" Jan 13 20:42:32.688507 containerd[1466]: time="2025-01-13T20:42:32.688083811Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:42:32.688507 containerd[1466]: time="2025-01-13T20:42:32.688193989Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:42:32.689409 containerd[1466]: time="2025-01-13T20:42:32.689386941Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:42:32.689609 containerd[1466]: time="2025-01-13T20:42:32.689574202Z" level=info msg="Start subscribing containerd event" Jan 13 20:42:32.689745 containerd[1466]: time="2025-01-13T20:42:32.689728870Z" level=info msg="Start recovering state" Jan 13 20:42:32.689862 containerd[1466]: time="2025-01-13T20:42:32.689845757Z" level=info msg="Start event monitor" Jan 13 20:42:32.689989 containerd[1466]: time="2025-01-13T20:42:32.689920218Z" level=info msg="Start snapshots syncer" Jan 13 20:42:32.690074 containerd[1466]: time="2025-01-13T20:42:32.690059669Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:42:32.690132 containerd[1466]: time="2025-01-13T20:42:32.690119589Z" level=info msg="Start streaming server" Jan 13 20:42:32.690652 containerd[1466]: time="2025-01-13T20:42:32.690634027Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:42:32.690818 containerd[1466]: time="2025-01-13T20:42:32.690756519Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:42:32.691399 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:42:32.696962 containerd[1466]: time="2025-01-13T20:42:32.696600365Z" level=info msg="containerd successfully booted in 0.075477s" Jan 13 20:42:32.934598 tar[1461]: linux-amd64/LICENSE Jan 13 20:42:32.934943 tar[1461]: linux-amd64/README.md Jan 13 20:42:32.938534 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:42:32.946599 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:42:32.964252 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:42:32.974696 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:42:32.982050 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:42:32.982255 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:42:32.991699 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:42:33.001755 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:42:33.014711 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:42:33.018929 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:42:33.023504 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:42:33.329853 systemd-networkd[1376]: eth0: Gained IPv6LL Jan 13 20:42:33.335626 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:42:33.341094 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:42:33.353024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:42:33.365933 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:42:33.427015 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:42:35.325633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:42:35.342142 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:42:36.606927 kubelet[1546]: E0113 20:42:36.606821 1546 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:42:36.610904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:42:36.611230 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:42:36.611928 systemd[1]: kubelet.service: Consumed 2.165s CPU time. Jan 13 20:42:37.706233 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:42:37.716062 systemd[1]: Started sshd@0-172.24.4.153:22-172.24.4.1:34058.service - OpenSSH per-connection server daemon (172.24.4.1:34058). Jan 13 20:42:38.051561 agetty[1526]: failed to open credentials directory Jan 13 20:42:38.051698 agetty[1527]: failed to open credentials directory Jan 13 20:42:38.118127 login[1527]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying Jan 13 20:42:38.120289 login[1526]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 20:42:38.148264 systemd-logind[1450]: New session 2 of user core. Jan 13 20:42:38.152557 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:42:38.160022 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:42:38.200385 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:42:38.210042 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:42:38.235604 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:42:38.397151 systemd[1564]: Queued start job for default target default.target. Jan 13 20:42:38.409584 systemd[1564]: Created slice app.slice - User Application Slice. Jan 13 20:42:38.409685 systemd[1564]: Reached target paths.target - Paths. Jan 13 20:42:38.409703 systemd[1564]: Reached target timers.target - Timers. Jan 13 20:42:38.411181 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:42:38.439915 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:42:38.440163 systemd[1564]: Reached target sockets.target - Sockets. Jan 13 20:42:38.440246 systemd[1564]: Reached target basic.target - Basic System. Jan 13 20:42:38.440285 systemd[1564]: Reached target default.target - Main User Target. Jan 13 20:42:38.440312 systemd[1564]: Startup finished in 191ms. Jan 13 20:42:38.440976 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:42:38.449800 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:42:38.881989 sshd[1556]: Accepted publickey for core from 172.24.4.1 port 34058 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:42:38.885413 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:42:38.896864 systemd-logind[1450]: New session 3 of user core. Jan 13 20:42:38.907737 systemd[1]: Started session-3.scope - Session 3 of User core. 
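The kubelet exit above (run.go:74, missing /var/lib/kubelet/config.yaml) is the normal state of a node that has not yet been initialized: that file is generated by kubeadm init or kubeadm join, and systemd simply keeps restarting the unit until it exists. A minimal stand-in, shown only to illustrate the file the kubelet is looking for:

    # normally written by `kubeadm init`/`kubeadm join`; a hand-rolled sketch,
    # not a complete working node config
    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF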
Jan 13 20:42:39.059870 coreos-metadata[1432]: Jan 13 20:42:39.059 WARN failed to locate config-drive, using the metadata service API instead Jan 13 20:42:39.107971 coreos-metadata[1432]: Jan 13 20:42:39.107 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 13 20:42:39.124012 login[1527]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 20:42:39.133669 systemd-logind[1450]: New session 1 of user core. Jan 13 20:42:39.145739 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:42:39.357389 coreos-metadata[1432]: Jan 13 20:42:39.356 INFO Fetch successful Jan 13 20:42:39.357389 coreos-metadata[1432]: Jan 13 20:42:39.357 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 20:42:39.371879 coreos-metadata[1432]: Jan 13 20:42:39.371 INFO Fetch successful Jan 13 20:42:39.371879 coreos-metadata[1432]: Jan 13 20:42:39.371 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 13 20:42:39.387010 coreos-metadata[1432]: Jan 13 20:42:39.386 INFO Fetch successful Jan 13 20:42:39.387010 coreos-metadata[1432]: Jan 13 20:42:39.386 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 13 20:42:39.403305 coreos-metadata[1432]: Jan 13 20:42:39.403 INFO Fetch successful Jan 13 20:42:39.403305 coreos-metadata[1432]: Jan 13 20:42:39.403 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 13 20:42:39.418255 coreos-metadata[1432]: Jan 13 20:42:39.418 INFO Fetch successful Jan 13 20:42:39.418255 coreos-metadata[1432]: Jan 13 20:42:39.418 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 13 20:42:39.431483 coreos-metadata[1432]: Jan 13 20:42:39.431 INFO Fetch successful Jan 13 20:42:39.460217 systemd[1]: Started sshd@1-172.24.4.153:22-172.24.4.1:34064.service - OpenSSH per-connection server daemon (172.24.4.1:34064). Jan 13 20:42:39.481248 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:42:39.482922 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:42:39.520780 coreos-metadata[1498]: Jan 13 20:42:39.520 WARN failed to locate config-drive, using the metadata service API instead Jan 13 20:42:39.562427 coreos-metadata[1498]: Jan 13 20:42:39.562 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 13 20:42:39.578956 coreos-metadata[1498]: Jan 13 20:42:39.578 INFO Fetch successful Jan 13 20:42:39.578956 coreos-metadata[1498]: Jan 13 20:42:39.578 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:42:39.594312 coreos-metadata[1498]: Jan 13 20:42:39.594 INFO Fetch successful Jan 13 20:42:39.606246 unknown[1498]: wrote ssh authorized keys file for user: core Jan 13 20:42:39.660736 update-ssh-keys[1605]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:42:39.662297 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:42:39.665744 systemd[1]: Finished sshkeys.service. Jan 13 20:42:39.671832 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:42:39.672141 systemd[1]: Startup finished in 1.277s (kernel) + 16.064s (initrd) + 10.858s (userspace) = 28.200s. 
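coreos-metadata fails to find a config-drive and falls back to the OpenStack metadata API, fetching the hostname, instance-id, instance-type, addresses, and SSH keys from 169.254.169.254. The same EC2-compatible endpoints can be queried by hand from inside the instance:

    # the endpoints polled above, fetched manually
    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key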
Jan 13 20:42:40.875419 sshd[1599]: Accepted publickey for core from 172.24.4.1 port 34064 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:42:40.878191 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:42:40.888150 systemd-logind[1450]: New session 4 of user core. Jan 13 20:42:40.897664 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:42:41.517609 sshd[1610]: Connection closed by 172.24.4.1 port 34064 Jan 13 20:42:41.518657 sshd-session[1599]: pam_unix(sshd:session): session closed for user core Jan 13 20:42:41.536953 systemd[1]: sshd@1-172.24.4.153:22-172.24.4.1:34064.service: Deactivated successfully. Jan 13 20:42:41.540047 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:42:41.543817 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:42:41.551889 systemd[1]: Started sshd@2-172.24.4.153:22-172.24.4.1:34078.service - OpenSSH per-connection server daemon (172.24.4.1:34078). Jan 13 20:42:41.554779 systemd-logind[1450]: Removed session 4. Jan 13 20:42:42.948119 sshd[1615]: Accepted publickey for core from 172.24.4.1 port 34078 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:42:42.951762 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:42:42.963967 systemd-logind[1450]: New session 5 of user core. Jan 13 20:42:42.971736 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:42:43.826106 sshd[1617]: Connection closed by 172.24.4.1 port 34078 Jan 13 20:42:43.827236 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Jan 13 20:42:43.840411 systemd[1]: sshd@2-172.24.4.153:22-172.24.4.1:34078.service: Deactivated successfully. Jan 13 20:42:43.843986 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:42:43.848780 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:42:43.855936 systemd[1]: Started sshd@3-172.24.4.153:22-172.24.4.1:51988.service - OpenSSH per-connection server daemon (172.24.4.1:51988). Jan 13 20:42:43.859286 systemd-logind[1450]: Removed session 5. Jan 13 20:42:45.211914 sshd[1622]: Accepted publickey for core from 172.24.4.1 port 51988 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:42:45.214927 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:42:45.225266 systemd-logind[1450]: New session 6 of user core. Jan 13 20:42:45.238816 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:42:46.087021 sshd[1624]: Connection closed by 172.24.4.1 port 51988 Jan 13 20:42:46.088044 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Jan 13 20:42:46.102004 systemd[1]: sshd@3-172.24.4.153:22-172.24.4.1:51988.service: Deactivated successfully. Jan 13 20:42:46.105458 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:42:46.107382 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:42:46.117955 systemd[1]: Started sshd@4-172.24.4.153:22-172.24.4.1:51992.service - OpenSSH per-connection server daemon (172.24.4.1:51992). Jan 13 20:42:46.121479 systemd-logind[1450]: Removed session 6. Jan 13 20:42:46.862515 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:42:46.869684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
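Each inbound connection above gets its own per-connection unit (sshd@N-<local>:22-<peer>:<port>.service) under system-sshd.slice, and each authenticated login becomes a session-N.scope managed by systemd-logind. A sketch of how to inspect the same state interactively on the host:

    # hypothetical inspection of the sessions and per-connection sshd units seen above
    loginctl list-sessions
    systemctl list-units 'sshd@*' 'session-*.scope'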
Jan 13 20:42:47.005823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:42:47.013574 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:42:47.308719 kubelet[1639]: E0113 20:42:47.308424 1639 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:42:47.317753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:42:47.318096 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:42:47.439545 sshd[1629]: Accepted publickey for core from 172.24.4.1 port 51992 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:42:47.442629 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:42:47.454013 systemd-logind[1450]: New session 7 of user core. Jan 13 20:42:47.465703 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:42:48.046417 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:42:48.047037 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:42:48.084159 sudo[1648]: pam_unix(sudo:session): session closed for user root Jan 13 20:42:48.261197 sshd[1647]: Connection closed by 172.24.4.1 port 51992 Jan 13 20:42:48.262507 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Jan 13 20:42:48.275581 systemd[1]: sshd@4-172.24.4.153:22-172.24.4.1:51992.service: Deactivated successfully. Jan 13 20:42:48.279454 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:42:48.282761 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:42:48.290915 systemd[1]: Started sshd@5-172.24.4.153:22-172.24.4.1:51996.service - OpenSSH per-connection server daemon (172.24.4.1:51996). Jan 13 20:42:48.294426 systemd-logind[1450]: Removed session 7. Jan 13 20:42:49.668502 sshd[1653]: Accepted publickey for core from 172.24.4.1 port 51996 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:42:49.671015 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:42:49.679829 systemd-logind[1450]: New session 8 of user core. Jan 13 20:42:49.691748 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:42:50.101890 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:42:50.103229 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:42:50.110849 sudo[1657]: pam_unix(sudo:session): session closed for user root Jan 13 20:42:50.121797 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:42:50.123065 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:42:50.152972 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:42:50.207296 augenrules[1679]: No rules Jan 13 20:42:50.209248 systemd[1]: audit-rules.service: Deactivated successfully. 
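The sudo entries above record the exact commands the core user ran: deleting the default audit rule fragments and restarting audit-rules, after which augenrules reports "No rules". As it would have been typed:

    # the sequence recorded in the log above
    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules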
Jan 13 20:42:50.209647 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:42:50.212034 sudo[1656]: pam_unix(sudo:session): session closed for user root Jan 13 20:42:50.369733 sshd[1655]: Connection closed by 172.24.4.1 port 51996 Jan 13 20:42:50.374481 sshd-session[1653]: pam_unix(sshd:session): session closed for user core Jan 13 20:42:50.384831 systemd[1]: sshd@5-172.24.4.153:22-172.24.4.1:51996.service: Deactivated successfully. Jan 13 20:42:50.388573 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:42:50.390398 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:42:50.398915 systemd[1]: Started sshd@6-172.24.4.153:22-172.24.4.1:52004.service - OpenSSH per-connection server daemon (172.24.4.1:52004). Jan 13 20:42:50.402319 systemd-logind[1450]: Removed session 8. Jan 13 20:42:51.950630 sshd[1687]: Accepted publickey for core from 172.24.4.1 port 52004 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:42:51.953558 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:42:51.964975 systemd-logind[1450]: New session 9 of user core. Jan 13 20:42:51.979803 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:42:52.425720 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:42:52.426433 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:42:53.121572 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:42:53.138932 (dockerd)[1710]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:42:53.756229 dockerd[1710]: time="2025-01-13T20:42:53.756173646Z" level=info msg="Starting up" Jan 13 20:42:53.947432 dockerd[1710]: time="2025-01-13T20:42:53.947338136Z" level=info msg="Loading containers: start." Jan 13 20:42:54.141499 kernel: Initializing XFRM netlink socket Jan 13 20:42:54.273163 systemd-networkd[1376]: docker0: Link UP Jan 13 20:42:54.313757 dockerd[1710]: time="2025-01-13T20:42:54.313700252Z" level=info msg="Loading containers: done." Jan 13 20:42:54.345970 dockerd[1710]: time="2025-01-13T20:42:54.345816754Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:42:54.346251 dockerd[1710]: time="2025-01-13T20:42:54.345999303Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 13 20:42:54.346251 dockerd[1710]: time="2025-01-13T20:42:54.346220413Z" level=info msg="Daemon has completed initialization" Jan 13 20:42:54.415018 dockerd[1710]: time="2025-01-13T20:42:54.414674373Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:42:54.415520 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:42:56.898447 containerd[1466]: time="2025-01-13T20:42:56.895235305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 20:42:57.507021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:42:57.521529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:42:57.669859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
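dockerd comes up on the overlay2 storage driver (warning that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled), creates the docker0 bridge, and serves its API on /run/docker.sock. A quick sanity check against the running daemon:

    # confirm the storage driver and server version reported above
    docker info --format '{{.Driver}}'
    docker version --format '{{.Server.Version}}'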
Jan 13 20:42:57.682787 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:42:58.039657 kubelet[1913]: E0113 20:42:58.039544 1913 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:42:58.045843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:42:58.046101 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:42:58.061581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2572500487.mount: Deactivated successfully. Jan 13 20:43:00.475090 containerd[1466]: time="2025-01-13T20:43:00.475014936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:00.476593 containerd[1466]: time="2025-01-13T20:43:00.476344934Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675650" Jan 13 20:43:00.477707 containerd[1466]: time="2025-01-13T20:43:00.477639137Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:00.481055 containerd[1466]: time="2025-01-13T20:43:00.480990071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:00.482382 containerd[1466]: time="2025-01-13T20:43:00.482211981Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 3.584516055s" Jan 13 20:43:00.482382 containerd[1466]: time="2025-01-13T20:43:00.482244400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 13 20:43:00.505739 containerd[1466]: time="2025-01-13T20:43:00.505688527Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 20:43:03.025107 containerd[1466]: time="2025-01-13T20:43:03.025046710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:03.027374 containerd[1466]: time="2025-01-13T20:43:03.027275803Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606417" Jan 13 20:43:03.028768 containerd[1466]: time="2025-01-13T20:43:03.028706483Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:03.032258 containerd[1466]: time="2025-01-13T20:43:03.032183403Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:03.033511 containerd[1466]: time="2025-01-13T20:43:03.033369886Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.527428475s" Jan 13 20:43:03.033511 containerd[1466]: time="2025-01-13T20:43:03.033404736Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 13 20:43:03.059131 containerd[1466]: time="2025-01-13T20:43:03.059092993Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 20:43:04.670884 containerd[1466]: time="2025-01-13T20:43:04.670351720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:04.672013 containerd[1466]: time="2025-01-13T20:43:04.671973512Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783043" Jan 13 20:43:04.672920 containerd[1466]: time="2025-01-13T20:43:04.672887788Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:04.676993 containerd[1466]: time="2025-01-13T20:43:04.676925614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:04.678186 containerd[1466]: time="2025-01-13T20:43:04.678141723Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.619010813s" Jan 13 20:43:04.678237 containerd[1466]: time="2025-01-13T20:43:04.678186082Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 13 20:43:04.702809 containerd[1466]: time="2025-01-13T20:43:04.702739811Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 20:43:06.029492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1087126460.mount: Deactivated successfully. 
Jan 13 20:43:06.853049 containerd[1466]: time="2025-01-13T20:43:06.852900578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:06.854967 containerd[1466]: time="2025-01-13T20:43:06.854843838Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478" Jan 13 20:43:06.857085 containerd[1466]: time="2025-01-13T20:43:06.856971098Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:06.862172 containerd[1466]: time="2025-01-13T20:43:06.862085018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:06.865099 containerd[1466]: time="2025-01-13T20:43:06.864561252Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.161759284s" Jan 13 20:43:06.865099 containerd[1466]: time="2025-01-13T20:43:06.864633497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 20:43:06.921190 containerd[1466]: time="2025-01-13T20:43:06.921036671Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:43:07.562429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount417111464.mount: Deactivated successfully. Jan 13 20:43:08.256299 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:43:08.262833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:43:08.407097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:08.411909 (kubelet)[2049]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:43:08.663730 kubelet[2049]: E0113 20:43:08.663440 2049 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:43:08.669812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:43:08.670248 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 20:43:09.149674 containerd[1466]: time="2025-01-13T20:43:09.149611762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:09.151239 containerd[1466]: time="2025-01-13T20:43:09.150961529Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 13 20:43:09.152361 containerd[1466]: time="2025-01-13T20:43:09.152273241Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:09.157188 containerd[1466]: time="2025-01-13T20:43:09.157123357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:09.158503 containerd[1466]: time="2025-01-13T20:43:09.158352294Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.237247966s" Jan 13 20:43:09.158503 containerd[1466]: time="2025-01-13T20:43:09.158391192Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:43:09.180979 containerd[1466]: time="2025-01-13T20:43:09.180908502Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:43:09.766650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190702746.mount: Deactivated successfully. 
Jan 13 20:43:09.774001 containerd[1466]: time="2025-01-13T20:43:09.773859967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:09.776069 containerd[1466]: time="2025-01-13T20:43:09.775863025Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 13 20:43:09.777375 containerd[1466]: time="2025-01-13T20:43:09.777199926Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:09.782907 containerd[1466]: time="2025-01-13T20:43:09.782838563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:09.785945 containerd[1466]: time="2025-01-13T20:43:09.785633096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 604.601088ms" Jan 13 20:43:09.785945 containerd[1466]: time="2025-01-13T20:43:09.785701763Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 20:43:09.832624 containerd[1466]: time="2025-01-13T20:43:09.832428204Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 20:43:10.510223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2480437516.mount: Deactivated successfully. Jan 13 20:43:13.909242 containerd[1466]: time="2025-01-13T20:43:13.907477521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:13.911465 containerd[1466]: time="2025-01-13T20:43:13.911079752Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 13 20:43:13.912906 containerd[1466]: time="2025-01-13T20:43:13.912832391Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:13.917039 containerd[1466]: time="2025-01-13T20:43:13.916960136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:13.919044 containerd[1466]: time="2025-01-13T20:43:13.918215177Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.085748507s" Jan 13 20:43:13.919044 containerd[1466]: time="2025-01-13T20:43:13.918251358Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 13 20:43:17.499276 update_engine[1452]: I20250113 20:43:17.495900 1452 update_attempter.cc:509] Updating boot flags... 
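update_engine logging "Updating boot flags" is Flatcar's updater marking the current boot; locksmithd, started earlier with strategy "reboot", is what would coordinate a reboot once an update is applied. Assuming the client binary is present, as it normally is on Flatcar, the updater can be queried directly:

    # check the updater's state (update_engine_client ships with Flatcar;
    # the -status flag is assumed from the CoreOS-era CLI)
    update_engine_client -status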
Jan 13 20:43:17.584348 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2195) Jan 13 20:43:17.626514 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2192) Jan 13 20:43:18.437622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:18.450874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:43:18.504526 systemd[1]: Reloading requested from client PID 2208 ('systemctl') (unit session-9.scope)... Jan 13 20:43:18.504553 systemd[1]: Reloading... Jan 13 20:43:18.609385 zram_generator::config[2247]: No configuration found. Jan 13 20:43:18.770294 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:43:18.854163 systemd[1]: Reloading finished in 349 ms. Jan 13 20:43:18.901735 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:43:18.901829 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:43:18.902172 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:18.904272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:43:19.014392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:19.016765 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:43:19.096314 kubelet[2311]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:43:19.096314 kubelet[2311]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:43:19.096314 kubelet[2311]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
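The restarted kubelet, evidently now finding its config file after the Reloading sequence above, warns that --container-runtime-endpoint, --pod-infra-container-image, and --volume-plugin-dir are deprecated flags that belong in the config file. For the first of these, KubeletConfiguration has had a dedicated field since v1.27; a sketch of the migration:

    # hypothetical: move the deprecated flag into the kubelet config file
    cat <<'EOF' >>/var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF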
Jan 13 20:43:19.096871 kubelet[2311]: I0113 20:43:19.096373 2311 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:43:19.531397 kubelet[2311]: I0113 20:43:19.531359 2311 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:43:19.531397 kubelet[2311]: I0113 20:43:19.531392 2311 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:43:19.531638 kubelet[2311]: I0113 20:43:19.531608 2311 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:43:19.548493 kubelet[2311]: I0113 20:43:19.548434 2311 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:43:19.551380 kubelet[2311]: E0113 20:43:19.551044 2311 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:19.573473 kubelet[2311]: I0113 20:43:19.573440 2311 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:43:19.573900 kubelet[2311]: I0113 20:43:19.573826 2311 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:43:19.574252 kubelet[2311]: I0113 20:43:19.573906 2311 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-b-778e6b4119.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:43:19.575825 kubelet[2311]: I0113 20:43:19.575798 2311 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:43:19.575876 kubelet[2311]: I0113 20:43:19.575837 2311 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:43:19.576074 kubelet[2311]: I0113 20:43:19.576041 2311 state_mem.go:36] "Initialized new 
in-memory state store" Jan 13 20:43:19.578058 kubelet[2311]: I0113 20:43:19.578029 2311 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:43:19.578121 kubelet[2311]: I0113 20:43:19.578069 2311 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:43:19.578121 kubelet[2311]: I0113 20:43:19.578107 2311 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:43:19.578175 kubelet[2311]: I0113 20:43:19.578134 2311 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:43:19.597623 kubelet[2311]: W0113 20:43:19.597208 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-b-778e6b4119.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:19.597623 kubelet[2311]: E0113 20:43:19.597353 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-b-778e6b4119.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:19.597623 kubelet[2311]: W0113 20:43:19.597472 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:19.597623 kubelet[2311]: E0113 20:43:19.597555 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:19.598311 kubelet[2311]: I0113 20:43:19.598262 2311 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:43:19.601647 kubelet[2311]: I0113 20:43:19.600569 2311 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:43:19.601647 kubelet[2311]: W0113 20:43:19.600622 2311 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 20:43:19.601647 kubelet[2311]: I0113 20:43:19.601439 2311 server.go:1264] "Started kubelet" Jan 13 20:43:19.601774 kubelet[2311]: I0113 20:43:19.601719 2311 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:43:19.603566 kubelet[2311]: I0113 20:43:19.603527 2311 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:43:19.607912 kubelet[2311]: I0113 20:43:19.607463 2311 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:43:19.607912 kubelet[2311]: I0113 20:43:19.607696 2311 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:43:19.607912 kubelet[2311]: E0113 20:43:19.607810 2311 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.153:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.153:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-b-778e6b4119.novalocal.181a5b4ea2c92f6d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-b-778e6b4119.novalocal,UID:ci-4186-1-0-b-778e6b4119.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-b-778e6b4119.novalocal,},FirstTimestamp:2025-01-13 20:43:19.601418093 +0000 UTC m=+0.579349535,LastTimestamp:2025-01-13 20:43:19.601418093 +0000 UTC m=+0.579349535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-b-778e6b4119.novalocal,}" Jan 13 20:43:19.608770 kubelet[2311]: I0113 20:43:19.608669 2311 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:43:19.614557 kubelet[2311]: I0113 20:43:19.614048 2311 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:43:19.614689 kubelet[2311]: E0113 20:43:19.614624 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-b-778e6b4119.novalocal?timeout=10s\": dial tcp 172.24.4.153:6443: connect: connection refused" interval="200ms" Jan 13 20:43:19.615056 kubelet[2311]: I0113 20:43:19.615031 2311 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:43:19.616497 kubelet[2311]: W0113 20:43:19.616454 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:19.616791 kubelet[2311]: E0113 20:43:19.616617 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:19.616994 kubelet[2311]: I0113 20:43:19.616980 2311 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:43:19.617148 kubelet[2311]: I0113 20:43:19.617130 2311 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:43:19.618127 kubelet[2311]: I0113 20:43:19.618116 2311 
reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:43:19.618768 kubelet[2311]: E0113 20:43:19.618510 2311 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:43:19.619205 kubelet[2311]: I0113 20:43:19.619191 2311 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:43:19.633109 kubelet[2311]: I0113 20:43:19.633071 2311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:43:19.635297 kubelet[2311]: I0113 20:43:19.635018 2311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:43:19.635297 kubelet[2311]: I0113 20:43:19.635041 2311 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:43:19.635297 kubelet[2311]: I0113 20:43:19.635059 2311 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:43:19.635297 kubelet[2311]: E0113 20:43:19.635091 2311 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:43:19.639796 kubelet[2311]: W0113 20:43:19.639741 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:19.639981 kubelet[2311]: E0113 20:43:19.639874 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:19.655123 kubelet[2311]: I0113 20:43:19.655042 2311 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:43:19.655123 kubelet[2311]: I0113 20:43:19.655073 2311 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:43:19.655123 kubelet[2311]: I0113 20:43:19.655089 2311 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:43:19.660694 kubelet[2311]: I0113 20:43:19.660653 2311 policy_none.go:49] "None policy: Start" Jan 13 20:43:19.661312 kubelet[2311]: I0113 20:43:19.661201 2311 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:43:19.661312 kubelet[2311]: I0113 20:43:19.661223 2311 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:43:19.673491 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:43:19.685623 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:43:19.695970 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 13 20:43:19.697473 kubelet[2311]: I0113 20:43:19.697433 2311 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:43:19.698016 kubelet[2311]: I0113 20:43:19.697590 2311 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:43:19.698016 kubelet[2311]: I0113 20:43:19.697686 2311 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:43:19.700242 kubelet[2311]: E0113 20:43:19.700217 2311 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-b-778e6b4119.novalocal\" not found" Jan 13 20:43:19.717408 kubelet[2311]: I0113 20:43:19.717110 2311 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.718143 kubelet[2311]: E0113 20:43:19.718100 2311 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.153:6443/api/v1/nodes\": dial tcp 172.24.4.153:6443: connect: connection refused" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.735372 kubelet[2311]: I0113 20:43:19.735265 2311 topology_manager.go:215] "Topology Admit Handler" podUID="bce7ac804fba7be56318e47b1ebb4c57" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.737604 kubelet[2311]: I0113 20:43:19.737442 2311 topology_manager.go:215] "Topology Admit Handler" podUID="c40379dd01b6de24d776525f4cce6074" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.739094 kubelet[2311]: I0113 20:43:19.738950 2311 topology_manager.go:215] "Topology Admit Handler" podUID="902b44417a4a387b0e96f5b18e49292b" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.751084 systemd[1]: Created slice kubepods-burstable-podbce7ac804fba7be56318e47b1ebb4c57.slice - libcontainer container kubepods-burstable-podbce7ac804fba7be56318e47b1ebb4c57.slice. Jan 13 20:43:19.780894 systemd[1]: Created slice kubepods-burstable-podc40379dd01b6de24d776525f4cce6074.slice - libcontainer container kubepods-burstable-podc40379dd01b6de24d776525f4cce6074.slice. Jan 13 20:43:19.790687 systemd[1]: Created slice kubepods-burstable-pod902b44417a4a387b0e96f5b18e49292b.slice - libcontainer container kubepods-burstable-pod902b44417a4a387b0e96f5b18e49292b.slice. 
Jan 13 20:43:19.816694 kubelet[2311]: E0113 20:43:19.816625 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-b-778e6b4119.novalocal?timeout=10s\": dial tcp 172.24.4.153:6443: connect: connection refused" interval="400ms" Jan 13 20:43:19.820097 kubelet[2311]: I0113 20:43:19.819943 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c40379dd01b6de24d776525f4cce6074-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"c40379dd01b6de24d776525f4cce6074\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.824368 kubelet[2311]: I0113 20:43:19.820162 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bce7ac804fba7be56318e47b1ebb4c57-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"bce7ac804fba7be56318e47b1ebb4c57\") " pod="kube-system/kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.824368 kubelet[2311]: I0113 20:43:19.822478 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bce7ac804fba7be56318e47b1ebb4c57-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"bce7ac804fba7be56318e47b1ebb4c57\") " pod="kube-system/kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.824368 kubelet[2311]: I0113 20:43:19.822552 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c40379dd01b6de24d776525f4cce6074-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"c40379dd01b6de24d776525f4cce6074\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.824368 kubelet[2311]: I0113 20:43:19.822615 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c40379dd01b6de24d776525f4cce6074-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"c40379dd01b6de24d776525f4cce6074\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.824692 kubelet[2311]: I0113 20:43:19.822687 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c40379dd01b6de24d776525f4cce6074-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"c40379dd01b6de24d776525f4cce6074\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.824692 kubelet[2311]: I0113 20:43:19.823664 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c40379dd01b6de24d776525f4cce6074-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"c40379dd01b6de24d776525f4cce6074\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.824692 
kubelet[2311]: I0113 20:43:19.823794 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/902b44417a4a387b0e96f5b18e49292b-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"902b44417a4a387b0e96f5b18e49292b\") " pod="kube-system/kube-scheduler-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.824692 kubelet[2311]: I0113 20:43:19.823898 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bce7ac804fba7be56318e47b1ebb4c57-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"bce7ac804fba7be56318e47b1ebb4c57\") " pod="kube-system/kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.921190 kubelet[2311]: I0113 20:43:19.921111 2311 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:19.921808 kubelet[2311]: E0113 20:43:19.921705 2311 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.153:6443/api/v1/nodes\": dial tcp 172.24.4.153:6443: connect: connection refused" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:20.075733 containerd[1466]: time="2025-01-13T20:43:20.075384909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal,Uid:bce7ac804fba7be56318e47b1ebb4c57,Namespace:kube-system,Attempt:0,}" Jan 13 20:43:20.088121 containerd[1466]: time="2025-01-13T20:43:20.088000089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal,Uid:c40379dd01b6de24d776525f4cce6074,Namespace:kube-system,Attempt:0,}" Jan 13 20:43:20.099295 containerd[1466]: time="2025-01-13T20:43:20.099167188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-b-778e6b4119.novalocal,Uid:902b44417a4a387b0e96f5b18e49292b,Namespace:kube-system,Attempt:0,}" Jan 13 20:43:20.218933 kubelet[2311]: E0113 20:43:20.218829 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-b-778e6b4119.novalocal?timeout=10s\": dial tcp 172.24.4.153:6443: connect: connection refused" interval="800ms" Jan 13 20:43:20.325704 kubelet[2311]: I0113 20:43:20.325585 2311 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:20.326917 kubelet[2311]: E0113 20:43:20.326405 2311 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.153:6443/api/v1/nodes\": dial tcp 172.24.4.153:6443: connect: connection refused" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:20.663934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3828253084.mount: Deactivated successfully. 
Jan 13 20:43:20.673052 containerd[1466]: time="2025-01-13T20:43:20.672872874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:43:20.676805 containerd[1466]: time="2025-01-13T20:43:20.676727484Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 13 20:43:20.679308 containerd[1466]: time="2025-01-13T20:43:20.679103144Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:43:20.682595 containerd[1466]: time="2025-01-13T20:43:20.682546015Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:43:20.685081 containerd[1466]: time="2025-01-13T20:43:20.684986621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:43:20.686616 containerd[1466]: time="2025-01-13T20:43:20.686419191Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:43:20.686616 containerd[1466]: time="2025-01-13T20:43:20.686498805Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:43:20.693434 containerd[1466]: time="2025-01-13T20:43:20.693399615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:43:20.693861 kubelet[2311]: W0113 20:43:20.693769 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-b-778e6b4119.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:20.693861 kubelet[2311]: E0113 20:43:20.693838 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-b-778e6b4119.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:20.695615 containerd[1466]: time="2025-01-13T20:43:20.695430626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 619.869335ms" Jan 13 20:43:20.700755 containerd[1466]: time="2025-01-13T20:43:20.700667739Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 612.460328ms" Jan 13 20:43:20.706756 
containerd[1466]: time="2025-01-13T20:43:20.706673393Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 607.337487ms" Jan 13 20:43:20.741416 kubelet[2311]: W0113 20:43:20.741079 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:20.741416 kubelet[2311]: E0113 20:43:20.741213 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:20.893426 kubelet[2311]: W0113 20:43:20.893021 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:20.893426 kubelet[2311]: E0113 20:43:20.893164 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:20.911735 containerd[1466]: time="2025-01-13T20:43:20.911597798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:43:20.911735 containerd[1466]: time="2025-01-13T20:43:20.911659178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:43:20.912778 containerd[1466]: time="2025-01-13T20:43:20.912577369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:20.912778 containerd[1466]: time="2025-01-13T20:43:20.912716338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:20.913645 containerd[1466]: time="2025-01-13T20:43:20.913591005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:43:20.913777 containerd[1466]: time="2025-01-13T20:43:20.913725126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:43:20.913866 containerd[1466]: time="2025-01-13T20:43:20.913765044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:20.914547 containerd[1466]: time="2025-01-13T20:43:20.914387181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:20.937891 containerd[1466]: time="2025-01-13T20:43:20.937599262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:43:20.937891 containerd[1466]: time="2025-01-13T20:43:20.937671262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:43:20.937891 containerd[1466]: time="2025-01-13T20:43:20.937697463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:20.939672 containerd[1466]: time="2025-01-13T20:43:20.937804902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:20.945802 systemd[1]: Started cri-containerd-50a0b464734b44cf2f1e7582b18ee7336559ad40fd9ca776f7ee40e165dd21b7.scope - libcontainer container 50a0b464734b44cf2f1e7582b18ee7336559ad40fd9ca776f7ee40e165dd21b7. Jan 13 20:43:20.964879 systemd[1]: Started cri-containerd-d03ab737e561cdcd116d44e2bd163e456c8570c5250d9e0db36e02e178e0662e.scope - libcontainer container d03ab737e561cdcd116d44e2bd163e456c8570c5250d9e0db36e02e178e0662e. Jan 13 20:43:20.975537 kubelet[2311]: W0113 20:43:20.975136 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:20.975537 kubelet[2311]: E0113 20:43:20.975222 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused Jan 13 20:43:20.981539 systemd[1]: Started cri-containerd-207056d76d89380e83c741f037017402f44f76c8356f29ac77a8ba0e120361ad.scope - libcontainer container 207056d76d89380e83c741f037017402f44f76c8356f29ac77a8ba0e120361ad. 
Jan 13 20:43:21.022396 kubelet[2311]: E0113 20:43:21.021412 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-b-778e6b4119.novalocal?timeout=10s\": dial tcp 172.24.4.153:6443: connect: connection refused" interval="1.6s" Jan 13 20:43:21.022944 containerd[1466]: time="2025-01-13T20:43:21.022914637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal,Uid:c40379dd01b6de24d776525f4cce6074,Namespace:kube-system,Attempt:0,} returns sandbox id \"d03ab737e561cdcd116d44e2bd163e456c8570c5250d9e0db36e02e178e0662e\"" Jan 13 20:43:21.034033 containerd[1466]: time="2025-01-13T20:43:21.034000725Z" level=info msg="CreateContainer within sandbox \"d03ab737e561cdcd116d44e2bd163e456c8570c5250d9e0db36e02e178e0662e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:43:21.049208 containerd[1466]: time="2025-01-13T20:43:21.049161367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-b-778e6b4119.novalocal,Uid:902b44417a4a387b0e96f5b18e49292b,Namespace:kube-system,Attempt:0,} returns sandbox id \"50a0b464734b44cf2f1e7582b18ee7336559ad40fd9ca776f7ee40e165dd21b7\"" Jan 13 20:43:21.053093 containerd[1466]: time="2025-01-13T20:43:21.053062465Z" level=info msg="CreateContainer within sandbox \"50a0b464734b44cf2f1e7582b18ee7336559ad40fd9ca776f7ee40e165dd21b7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:43:21.057749 containerd[1466]: time="2025-01-13T20:43:21.057700110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal,Uid:bce7ac804fba7be56318e47b1ebb4c57,Namespace:kube-system,Attempt:0,} returns sandbox id \"207056d76d89380e83c741f037017402f44f76c8356f29ac77a8ba0e120361ad\"" Jan 13 20:43:21.061807 containerd[1466]: time="2025-01-13T20:43:21.061776888Z" level=info msg="CreateContainer within sandbox \"207056d76d89380e83c741f037017402f44f76c8356f29ac77a8ba0e120361ad\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:43:21.064511 containerd[1466]: time="2025-01-13T20:43:21.064469405Z" level=info msg="CreateContainer within sandbox \"d03ab737e561cdcd116d44e2bd163e456c8570c5250d9e0db36e02e178e0662e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"94d12c503fb6e1c3848bab978a7d21739d8278d517313a0c07b3e92086f3cde1\"" Jan 13 20:43:21.065239 containerd[1466]: time="2025-01-13T20:43:21.065190723Z" level=info msg="StartContainer for \"94d12c503fb6e1c3848bab978a7d21739d8278d517313a0c07b3e92086f3cde1\"" Jan 13 20:43:21.098492 containerd[1466]: time="2025-01-13T20:43:21.098416263Z" level=info msg="CreateContainer within sandbox \"207056d76d89380e83c741f037017402f44f76c8356f29ac77a8ba0e120361ad\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"528585e31b20edf2c45744f37a5e236122eba1f9dba81cf93219a1870e0b34b5\"" Jan 13 20:43:21.101486 containerd[1466]: time="2025-01-13T20:43:21.099046113Z" level=info msg="StartContainer for \"528585e31b20edf2c45744f37a5e236122eba1f9dba81cf93219a1870e0b34b5\"" Jan 13 20:43:21.100498 systemd[1]: Started cri-containerd-94d12c503fb6e1c3848bab978a7d21739d8278d517313a0c07b3e92086f3cde1.scope - libcontainer container 94d12c503fb6e1c3848bab978a7d21739d8278d517313a0c07b3e92086f3cde1. 
Jan 13 20:43:21.101886 containerd[1466]: time="2025-01-13T20:43:21.101738680Z" level=info msg="CreateContainer within sandbox \"50a0b464734b44cf2f1e7582b18ee7336559ad40fd9ca776f7ee40e165dd21b7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5ab0c3aa09d416ff886742db1135d16b7f856b8d84bb800e1c812591b18943ba\"" Jan 13 20:43:21.102724 containerd[1466]: time="2025-01-13T20:43:21.102705213Z" level=info msg="StartContainer for \"5ab0c3aa09d416ff886742db1135d16b7f856b8d84bb800e1c812591b18943ba\"" Jan 13 20:43:21.131769 kubelet[2311]: I0113 20:43:21.131588 2311 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:21.133436 kubelet[2311]: E0113 20:43:21.132708 2311 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.153:6443/api/v1/nodes\": dial tcp 172.24.4.153:6443: connect: connection refused" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:21.153143 systemd[1]: Started cri-containerd-528585e31b20edf2c45744f37a5e236122eba1f9dba81cf93219a1870e0b34b5.scope - libcontainer container 528585e31b20edf2c45744f37a5e236122eba1f9dba81cf93219a1870e0b34b5. Jan 13 20:43:21.160528 systemd[1]: Started cri-containerd-5ab0c3aa09d416ff886742db1135d16b7f856b8d84bb800e1c812591b18943ba.scope - libcontainer container 5ab0c3aa09d416ff886742db1135d16b7f856b8d84bb800e1c812591b18943ba. Jan 13 20:43:21.190780 containerd[1466]: time="2025-01-13T20:43:21.190173754Z" level=info msg="StartContainer for \"94d12c503fb6e1c3848bab978a7d21739d8278d517313a0c07b3e92086f3cde1\" returns successfully" Jan 13 20:43:21.239270 containerd[1466]: time="2025-01-13T20:43:21.239205738Z" level=info msg="StartContainer for \"528585e31b20edf2c45744f37a5e236122eba1f9dba81cf93219a1870e0b34b5\" returns successfully" Jan 13 20:43:21.270822 containerd[1466]: time="2025-01-13T20:43:21.270786362Z" level=info msg="StartContainer for \"5ab0c3aa09d416ff886742db1135d16b7f856b8d84bb800e1c812591b18943ba\" returns successfully" Jan 13 20:43:22.735766 kubelet[2311]: I0113 20:43:22.735346 2311 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:22.997499 kubelet[2311]: E0113 20:43:22.997386 2311 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-0-b-778e6b4119.novalocal\" not found" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:23.109928 kubelet[2311]: I0113 20:43:23.109853 2311 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:23.588720 kubelet[2311]: I0113 20:43:23.588407 2311 apiserver.go:52] "Watching apiserver" Jan 13 20:43:23.615237 kubelet[2311]: I0113 20:43:23.615205 2311 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:43:23.729641 kubelet[2311]: E0113 20:43:23.729104 2311 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:25.747160 systemd[1]: Reloading requested from client PID 2587 ('systemctl') (unit session-9.scope)... Jan 13 20:43:25.747510 systemd[1]: Reloading... Jan 13 20:43:25.903387 zram_generator::config[2629]: No configuration found. 
Jan 13 20:43:26.078829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:43:26.189732 systemd[1]: Reloading finished in 441 ms. Jan 13 20:43:26.242359 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:43:26.255289 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:43:26.255506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:26.255552 systemd[1]: kubelet.service: Consumed 1.072s CPU time, 113.9M memory peak, 0B memory swap peak. Jan 13 20:43:26.262808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:43:26.479417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:43:26.490592 (kubelet)[2690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:43:26.621489 kubelet[2690]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:43:26.621489 kubelet[2690]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:43:26.621489 kubelet[2690]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:43:26.621489 kubelet[2690]: I0113 20:43:26.621505 2690 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:43:26.627597 kubelet[2690]: I0113 20:43:26.627543 2690 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:43:26.627597 kubelet[2690]: I0113 20:43:26.627565 2690 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:43:26.627792 kubelet[2690]: I0113 20:43:26.627756 2690 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:43:26.629527 kubelet[2690]: I0113 20:43:26.629506 2690 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:43:26.633066 kubelet[2690]: I0113 20:43:26.632427 2690 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:43:26.648926 kubelet[2690]: I0113 20:43:26.647595 2690 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:43:26.648926 kubelet[2690]: I0113 20:43:26.647843 2690 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:43:26.648926 kubelet[2690]: I0113 20:43:26.647884 2690 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-b-778e6b4119.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:43:26.648926 kubelet[2690]: I0113 20:43:26.648139 2690 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:43:26.649254 kubelet[2690]: I0113 20:43:26.648151 2690 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:43:26.649254 kubelet[2690]: I0113 20:43:26.648199 2690 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:43:26.649254 kubelet[2690]: I0113 20:43:26.648307 2690 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:43:26.649254 kubelet[2690]: I0113 20:43:26.648339 2690 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:43:26.649254 kubelet[2690]: I0113 20:43:26.648373 2690 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:43:26.649254 kubelet[2690]: I0113 20:43:26.648390 2690 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:43:26.649470 kubelet[2690]: I0113 20:43:26.649318 2690 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:43:26.649618 kubelet[2690]: I0113 20:43:26.649594 2690 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:43:26.650064 kubelet[2690]: I0113 20:43:26.650045 2690 server.go:1264] "Started kubelet" Jan 13 20:43:26.653975 kubelet[2690]: I0113 20:43:26.652844 2690 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:43:26.661741 kubelet[2690]: I0113 20:43:26.661697 2690 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:43:26.662805 kubelet[2690]: I0113 20:43:26.662782 2690 server.go:455] 
"Adding debug handlers to kubelet server" Jan 13 20:43:26.664054 kubelet[2690]: I0113 20:43:26.663999 2690 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:43:26.664430 kubelet[2690]: I0113 20:43:26.664417 2690 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:43:26.666531 kubelet[2690]: I0113 20:43:26.666491 2690 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:43:26.674692 kubelet[2690]: I0113 20:43:26.674653 2690 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:43:26.675929 kubelet[2690]: I0113 20:43:26.674822 2690 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:43:26.677726 kubelet[2690]: I0113 20:43:26.676968 2690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:43:26.677915 kubelet[2690]: I0113 20:43:26.677871 2690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:43:26.677915 kubelet[2690]: I0113 20:43:26.677902 2690 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:43:26.678020 kubelet[2690]: I0113 20:43:26.677919 2690 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:43:26.678020 kubelet[2690]: E0113 20:43:26.677957 2690 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:43:26.689193 kubelet[2690]: I0113 20:43:26.689158 2690 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:43:26.689518 kubelet[2690]: I0113 20:43:26.689496 2690 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:43:26.697777 kubelet[2690]: E0113 20:43:26.695969 2690 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:43:26.697777 kubelet[2690]: I0113 20:43:26.696263 2690 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:43:26.721194 sudo[2718]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:43:26.722172 sudo[2718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:43:26.758308 kubelet[2690]: I0113 20:43:26.758164 2690 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:43:26.758308 kubelet[2690]: I0113 20:43:26.758183 2690 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:43:26.758308 kubelet[2690]: I0113 20:43:26.758200 2690 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:43:26.758538 kubelet[2690]: I0113 20:43:26.758394 2690 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:43:26.758538 kubelet[2690]: I0113 20:43:26.758408 2690 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:43:26.758538 kubelet[2690]: I0113 20:43:26.758426 2690 policy_none.go:49] "None policy: Start" Jan 13 20:43:26.759702 kubelet[2690]: I0113 20:43:26.758996 2690 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:43:26.759702 kubelet[2690]: I0113 20:43:26.759018 2690 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:43:26.759702 kubelet[2690]: I0113 20:43:26.759133 2690 state_mem.go:75] "Updated machine memory state" Jan 13 20:43:26.763923 kubelet[2690]: I0113 20:43:26.763650 2690 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:43:26.765066 kubelet[2690]: I0113 20:43:26.764418 2690 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:43:26.765066 kubelet[2690]: I0113 20:43:26.764514 2690 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:43:26.776654 kubelet[2690]: I0113 20:43:26.776618 2690 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:26.780169 kubelet[2690]: I0113 20:43:26.779744 2690 topology_manager.go:215] "Topology Admit Handler" podUID="902b44417a4a387b0e96f5b18e49292b" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:26.780169 kubelet[2690]: I0113 20:43:26.779848 2690 topology_manager.go:215] "Topology Admit Handler" podUID="bce7ac804fba7be56318e47b1ebb4c57" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:26.780169 kubelet[2690]: I0113 20:43:26.779907 2690 topology_manager.go:215] "Topology Admit Handler" podUID="c40379dd01b6de24d776525f4cce6074" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:26.799041 kubelet[2690]: W0113 20:43:26.798671 2690 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:43:26.809369 kubelet[2690]: I0113 20:43:26.807129 2690 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:26.809369 kubelet[2690]: I0113 20:43:26.807205 2690 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:26.809369 kubelet[2690]: W0113 
20:43:26.807490 2690 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:43:26.809369 kubelet[2690]: W0113 20:43:26.808357 2690 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:43:26.875907 kubelet[2690]: I0113 20:43:26.875855 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bce7ac804fba7be56318e47b1ebb4c57-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"bce7ac804fba7be56318e47b1ebb4c57\") " pod="kube-system/kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:26.875907 kubelet[2690]: I0113 20:43:26.875917 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bce7ac804fba7be56318e47b1ebb4c57-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"bce7ac804fba7be56318e47b1ebb4c57\") " pod="kube-system/kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:26.876132 kubelet[2690]: I0113 20:43:26.876069 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c40379dd01b6de24d776525f4cce6074-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"c40379dd01b6de24d776525f4cce6074\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:26.876166 kubelet[2690]: I0113 20:43:26.876130 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c40379dd01b6de24d776525f4cce6074-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"c40379dd01b6de24d776525f4cce6074\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:26.876166 kubelet[2690]: I0113 20:43:26.876160 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/902b44417a4a387b0e96f5b18e49292b-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"902b44417a4a387b0e96f5b18e49292b\") " pod="kube-system/kube-scheduler-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:26.876238 kubelet[2690]: I0113 20:43:26.876178 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bce7ac804fba7be56318e47b1ebb4c57-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"bce7ac804fba7be56318e47b1ebb4c57\") " pod="kube-system/kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:26.876238 kubelet[2690]: I0113 20:43:26.876229 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c40379dd01b6de24d776525f4cce6074-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"c40379dd01b6de24d776525f4cce6074\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal" 
Jan 13 20:43:26.876294 kubelet[2690]: I0113 20:43:26.876248 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c40379dd01b6de24d776525f4cce6074-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"c40379dd01b6de24d776525f4cce6074\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:26.876956 kubelet[2690]: I0113 20:43:26.876300 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c40379dd01b6de24d776525f4cce6074-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal\" (UID: \"c40379dd01b6de24d776525f4cce6074\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:27.327572 sudo[2718]: pam_unix(sudo:session): session closed for user root Jan 13 20:43:27.659310 kubelet[2690]: I0113 20:43:27.659122 2690 apiserver.go:52] "Watching apiserver" Jan 13 20:43:27.675484 kubelet[2690]: I0113 20:43:27.675461 2690 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:43:27.747441 kubelet[2690]: W0113 20:43:27.747405 2690 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:43:27.747621 kubelet[2690]: E0113 20:43:27.747467 2690 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal" Jan 13 20:43:27.764625 kubelet[2690]: I0113 20:43:27.764553 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-b-778e6b4119.novalocal" podStartSLOduration=1.764538564 podStartE2EDuration="1.764538564s" podCreationTimestamp="2025-01-13 20:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:43:27.764207467 +0000 UTC m=+1.267723270" watchObservedRunningTime="2025-01-13 20:43:27.764538564 +0000 UTC m=+1.268054357" Jan 13 20:43:27.905899 kubelet[2690]: I0113 20:43:27.905068 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-b-778e6b4119.novalocal" podStartSLOduration=1.905045855 podStartE2EDuration="1.905045855s" podCreationTimestamp="2025-01-13 20:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:43:27.80031484 +0000 UTC m=+1.303830643" watchObservedRunningTime="2025-01-13 20:43:27.905045855 +0000 UTC m=+1.408561648" Jan 13 20:43:27.905899 kubelet[2690]: I0113 20:43:27.905366 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-b-778e6b4119.novalocal" podStartSLOduration=1.905359348 podStartE2EDuration="1.905359348s" podCreationTimestamp="2025-01-13 20:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:43:27.905030926 +0000 UTC m=+1.408546729" watchObservedRunningTime="2025-01-13 20:43:27.905359348 +0000 UTC m=+1.408875141" Jan 13 20:43:29.265528 sudo[1690]: pam_unix(sudo:session): 
session closed for user root Jan 13 20:43:29.544386 sshd[1689]: Connection closed by 172.24.4.1 port 52004 Jan 13 20:43:29.545213 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Jan 13 20:43:29.551434 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:43:29.553179 systemd[1]: sshd@6-172.24.4.153:22-172.24.4.1:52004.service: Deactivated successfully. Jan 13 20:43:29.558241 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:43:29.558837 systemd[1]: session-9.scope: Consumed 7.699s CPU time, 190.2M memory peak, 0B memory swap peak. Jan 13 20:43:29.562624 systemd-logind[1450]: Removed session 9. Jan 13 20:43:38.440310 kubelet[2690]: I0113 20:43:38.440204 2690 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:43:38.444170 containerd[1466]: time="2025-01-13T20:43:38.443915238Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:43:38.445256 kubelet[2690]: I0113 20:43:38.445135 2690 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:43:39.302868 kubelet[2690]: I0113 20:43:39.302038 2690 topology_manager.go:215] "Topology Admit Handler" podUID="0af46be1-042b-40ce-9716-2fcccf2c38cf" podNamespace="kube-system" podName="kube-proxy-46rl4" Jan 13 20:43:39.312024 kubelet[2690]: I0113 20:43:39.311980 2690 topology_manager.go:215] "Topology Admit Handler" podUID="374419b6-8645-485c-9a51-3b66501bb499" podNamespace="kube-system" podName="cilium-6vfwt" Jan 13 20:43:39.318206 systemd[1]: Created slice kubepods-besteffort-pod0af46be1_042b_40ce_9716_2fcccf2c38cf.slice - libcontainer container kubepods-besteffort-pod0af46be1_042b_40ce_9716_2fcccf2c38cf.slice. Jan 13 20:43:39.332969 systemd[1]: Created slice kubepods-burstable-pod374419b6_8645_485c_9a51_3b66501bb499.slice - libcontainer container kubepods-burstable-pod374419b6_8645_485c_9a51_3b66501bb499.slice. 
Jan 13 20:43:39.359644 kubelet[2690]: I0113 20:43:39.359613 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-bpf-maps\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.359861 kubelet[2690]: I0113 20:43:39.359842 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/374419b6-8645-485c-9a51-3b66501bb499-clustermesh-secrets\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.360069 kubelet[2690]: I0113 20:43:39.360038 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/374419b6-8645-485c-9a51-3b66501bb499-cilium-config-path\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.360206 kubelet[2690]: I0113 20:43:39.360192 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-hostproc\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.360351 kubelet[2690]: I0113 20:43:39.360318 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-etc-cni-netd\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.360466 kubelet[2690]: I0113 20:43:39.360453 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-xtables-lock\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.360587 kubelet[2690]: I0113 20:43:39.360572 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-host-proc-sys-kernel\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.360703 kubelet[2690]: I0113 20:43:39.360687 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q872\" (UniqueName: \"kubernetes.io/projected/0af46be1-042b-40ce-9716-2fcccf2c38cf-kube-api-access-7q872\") pod \"kube-proxy-46rl4\" (UID: \"0af46be1-042b-40ce-9716-2fcccf2c38cf\") " pod="kube-system/kube-proxy-46rl4" Jan 13 20:43:39.360822 kubelet[2690]: I0113 20:43:39.360809 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-cilium-run\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.360906 kubelet[2690]: I0113 20:43:39.360894 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-cni-path\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.361010 kubelet[2690]: I0113 20:43:39.360998 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-lib-modules\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.361127 kubelet[2690]: I0113 20:43:39.361113 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-host-proc-sys-net\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.361249 kubelet[2690]: I0113 20:43:39.361228 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0af46be1-042b-40ce-9716-2fcccf2c38cf-kube-proxy\") pod \"kube-proxy-46rl4\" (UID: \"0af46be1-042b-40ce-9716-2fcccf2c38cf\") " pod="kube-system/kube-proxy-46rl4" Jan 13 20:43:39.361374 kubelet[2690]: I0113 20:43:39.361361 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0af46be1-042b-40ce-9716-2fcccf2c38cf-lib-modules\") pod \"kube-proxy-46rl4\" (UID: \"0af46be1-042b-40ce-9716-2fcccf2c38cf\") " pod="kube-system/kube-proxy-46rl4" Jan 13 20:43:39.361593 kubelet[2690]: I0113 20:43:39.361444 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/374419b6-8645-485c-9a51-3b66501bb499-hubble-tls\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.361593 kubelet[2690]: I0113 20:43:39.361465 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhnk2\" (UniqueName: \"kubernetes.io/projected/374419b6-8645-485c-9a51-3b66501bb499-kube-api-access-mhnk2\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.361593 kubelet[2690]: I0113 20:43:39.361484 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-cilium-cgroup\") pod \"cilium-6vfwt\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") " pod="kube-system/cilium-6vfwt" Jan 13 20:43:39.361593 kubelet[2690]: I0113 20:43:39.361501 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0af46be1-042b-40ce-9716-2fcccf2c38cf-xtables-lock\") pod \"kube-proxy-46rl4\" (UID: \"0af46be1-042b-40ce-9716-2fcccf2c38cf\") " pod="kube-system/kube-proxy-46rl4" Jan 13 20:43:39.501136 kubelet[2690]: I0113 20:43:39.501093 2690 topology_manager.go:215] "Topology Admit Handler" podUID="8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f" podNamespace="kube-system" podName="cilium-operator-599987898-6ps6g" Jan 13 20:43:39.510360 systemd[1]: Created slice 
kubepods-besteffort-pod8b0d85ce_5a3d_48e0_9ae8_d4c88a98997f.slice - libcontainer container kubepods-besteffort-pod8b0d85ce_5a3d_48e0_9ae8_d4c88a98997f.slice. Jan 13 20:43:39.564158 kubelet[2690]: I0113 20:43:39.563960 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f-cilium-config-path\") pod \"cilium-operator-599987898-6ps6g\" (UID: \"8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f\") " pod="kube-system/cilium-operator-599987898-6ps6g" Jan 13 20:43:39.564158 kubelet[2690]: I0113 20:43:39.564040 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql2vb\" (UniqueName: \"kubernetes.io/projected/8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f-kube-api-access-ql2vb\") pod \"cilium-operator-599987898-6ps6g\" (UID: \"8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f\") " pod="kube-system/cilium-operator-599987898-6ps6g" Jan 13 20:43:39.628951 containerd[1466]: time="2025-01-13T20:43:39.628783145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-46rl4,Uid:0af46be1-042b-40ce-9716-2fcccf2c38cf,Namespace:kube-system,Attempt:0,}" Jan 13 20:43:39.640378 containerd[1466]: time="2025-01-13T20:43:39.640118750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vfwt,Uid:374419b6-8645-485c-9a51-3b66501bb499,Namespace:kube-system,Attempt:0,}" Jan 13 20:43:39.672025 containerd[1466]: time="2025-01-13T20:43:39.671701654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:43:39.672025 containerd[1466]: time="2025-01-13T20:43:39.671761438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:43:39.672025 containerd[1466]: time="2025-01-13T20:43:39.671781256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:39.672025 containerd[1466]: time="2025-01-13T20:43:39.671863082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:39.703771 containerd[1466]: time="2025-01-13T20:43:39.703632082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:43:39.703771 containerd[1466]: time="2025-01-13T20:43:39.703695743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:43:39.703771 containerd[1466]: time="2025-01-13T20:43:39.703710821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:39.704522 systemd[1]: Started cri-containerd-b657f4b24844719d4418826ef30101da7de3646099df8fa0b01372b548a16214.scope - libcontainer container b657f4b24844719d4418826ef30101da7de3646099df8fa0b01372b548a16214. Jan 13 20:43:39.705073 containerd[1466]: time="2025-01-13T20:43:39.704433589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:39.729489 systemd[1]: Started cri-containerd-c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53.scope - libcontainer container c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53. Jan 13 20:43:39.743708 containerd[1466]: time="2025-01-13T20:43:39.743548909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-46rl4,Uid:0af46be1-042b-40ce-9716-2fcccf2c38cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"b657f4b24844719d4418826ef30101da7de3646099df8fa0b01372b548a16214\"" Jan 13 20:43:39.751672 containerd[1466]: time="2025-01-13T20:43:39.751636955Z" level=info msg="CreateContainer within sandbox \"b657f4b24844719d4418826ef30101da7de3646099df8fa0b01372b548a16214\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:43:39.767291 containerd[1466]: time="2025-01-13T20:43:39.767231356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vfwt,Uid:374419b6-8645-485c-9a51-3b66501bb499,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\"" Jan 13 20:43:39.769342 containerd[1466]: time="2025-01-13T20:43:39.769292595Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:43:39.783086 containerd[1466]: time="2025-01-13T20:43:39.783032883Z" level=info msg="CreateContainer within sandbox \"b657f4b24844719d4418826ef30101da7de3646099df8fa0b01372b548a16214\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3559a9d7522e1d92883836ae9052b79517ded6da94060024c752bb173816cbd4\"" Jan 13 20:43:39.784536 containerd[1466]: time="2025-01-13T20:43:39.784502615Z" level=info msg="StartContainer for \"3559a9d7522e1d92883836ae9052b79517ded6da94060024c752bb173816cbd4\"" Jan 13 20:43:39.813474 systemd[1]: Started cri-containerd-3559a9d7522e1d92883836ae9052b79517ded6da94060024c752bb173816cbd4.scope - libcontainer container 3559a9d7522e1d92883836ae9052b79517ded6da94060024c752bb173816cbd4. Jan 13 20:43:39.814899 containerd[1466]: time="2025-01-13T20:43:39.814559621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6ps6g,Uid:8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f,Namespace:kube-system,Attempt:0,}" Jan 13 20:43:39.854597 containerd[1466]: time="2025-01-13T20:43:39.854233416Z" level=info msg="StartContainer for \"3559a9d7522e1d92883836ae9052b79517ded6da94060024c752bb173816cbd4\" returns successfully" Jan 13 20:43:39.855617 containerd[1466]: time="2025-01-13T20:43:39.855234062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:43:39.857975 containerd[1466]: time="2025-01-13T20:43:39.857396193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:43:39.857975 containerd[1466]: time="2025-01-13T20:43:39.857428425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:39.857975 containerd[1466]: time="2025-01-13T20:43:39.857528455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:43:39.878510 systemd[1]: Started cri-containerd-e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c.scope - libcontainer container e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c. Jan 13 20:43:39.947375 containerd[1466]: time="2025-01-13T20:43:39.947298761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6ps6g,Uid:8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\"" Jan 13 20:43:40.809604 kubelet[2690]: I0113 20:43:40.808437 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-46rl4" podStartSLOduration=1.80840628 podStartE2EDuration="1.80840628s" podCreationTimestamp="2025-01-13 20:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:43:40.808116388 +0000 UTC m=+14.311632241" watchObservedRunningTime="2025-01-13 20:43:40.80840628 +0000 UTC m=+14.311922123" Jan 13 20:43:48.824019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3025348556.mount: Deactivated successfully. Jan 13 20:43:51.226410 containerd[1466]: time="2025-01-13T20:43:51.226316318Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:51.227865 containerd[1466]: time="2025-01-13T20:43:51.227824942Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734727" Jan 13 20:43:51.228819 containerd[1466]: time="2025-01-13T20:43:51.228771378Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:43:51.231501 containerd[1466]: time="2025-01-13T20:43:51.230738552Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.461413275s" Jan 13 20:43:51.231501 containerd[1466]: time="2025-01-13T20:43:51.230780562Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:43:51.234901 containerd[1466]: time="2025-01-13T20:43:51.234772870Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:43:51.236205 containerd[1466]: time="2025-01-13T20:43:51.236019597Z" level=info msg="CreateContainer within sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:43:51.259959 containerd[1466]: time="2025-01-13T20:43:51.259909574Z" level=info msg="CreateContainer within sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff\"" Jan 13 20:43:51.260658 containerd[1466]: time="2025-01-13T20:43:51.260633238Z" level=info msg="StartContainer for \"f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff\"" Jan 13 20:43:51.302767 systemd[1]: Started cri-containerd-f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff.scope - libcontainer container f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff. Jan 13 20:43:51.340236 containerd[1466]: time="2025-01-13T20:43:51.340186034Z" level=info msg="StartContainer for \"f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff\" returns successfully" Jan 13 20:43:51.348968 systemd[1]: cri-containerd-f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff.scope: Deactivated successfully. Jan 13 20:43:52.254902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff-rootfs.mount: Deactivated successfully. Jan 13 20:43:52.701257 containerd[1466]: time="2025-01-13T20:43:52.701063862Z" level=info msg="shim disconnected" id=f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff namespace=k8s.io Jan 13 20:43:52.702999 containerd[1466]: time="2025-01-13T20:43:52.702906870Z" level=warning msg="cleaning up after shim disconnected" id=f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff namespace=k8s.io Jan 13 20:43:52.702999 containerd[1466]: time="2025-01-13T20:43:52.702962005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:43:52.735634 containerd[1466]: time="2025-01-13T20:43:52.735505795Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:43:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:43:52.834136 containerd[1466]: time="2025-01-13T20:43:52.833683185Z" level=info msg="CreateContainer within sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:43:52.902163 containerd[1466]: time="2025-01-13T20:43:52.902074968Z" level=info msg="CreateContainer within sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d\"" Jan 13 20:43:52.903864 containerd[1466]: time="2025-01-13T20:43:52.903830439Z" level=info msg="StartContainer for \"21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d\"" Jan 13 20:43:52.957356 systemd[1]: Started cri-containerd-21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d.scope - libcontainer container 21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d. Jan 13 20:43:52.989627 containerd[1466]: time="2025-01-13T20:43:52.989573220Z" level=info msg="StartContainer for \"21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d\" returns successfully" Jan 13 20:43:53.000009 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:43:53.000513 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:43:53.000594 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:43:53.005861 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 13 20:43:53.007789 systemd[1]: cri-containerd-21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d.scope: Deactivated successfully. Jan 13 20:43:53.034288 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:43:53.048828 containerd[1466]: time="2025-01-13T20:43:53.048770554Z" level=info msg="shim disconnected" id=21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d namespace=k8s.io Jan 13 20:43:53.048828 containerd[1466]: time="2025-01-13T20:43:53.048823365Z" level=warning msg="cleaning up after shim disconnected" id=21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d namespace=k8s.io Jan 13 20:43:53.049050 containerd[1466]: time="2025-01-13T20:43:53.048834125Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:43:53.255061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d-rootfs.mount: Deactivated successfully. Jan 13 20:43:53.841896 containerd[1466]: time="2025-01-13T20:43:53.841809760Z" level=info msg="CreateContainer within sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:43:53.915733 containerd[1466]: time="2025-01-13T20:43:53.913871029Z" level=info msg="CreateContainer within sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081\"" Jan 13 20:43:53.915733 containerd[1466]: time="2025-01-13T20:43:53.914508198Z" level=info msg="StartContainer for \"f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081\"" Jan 13 20:43:53.956643 systemd[1]: Started cri-containerd-f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081.scope - libcontainer container f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081. Jan 13 20:43:53.990070 systemd[1]: cri-containerd-f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081.scope: Deactivated successfully. Jan 13 20:43:53.994063 containerd[1466]: time="2025-01-13T20:43:53.994014141Z" level=info msg="StartContainer for \"f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081\" returns successfully" Jan 13 20:43:54.021112 containerd[1466]: time="2025-01-13T20:43:54.021045143Z" level=info msg="shim disconnected" id=f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081 namespace=k8s.io Jan 13 20:43:54.021112 containerd[1466]: time="2025-01-13T20:43:54.021098334Z" level=warning msg="cleaning up after shim disconnected" id=f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081 namespace=k8s.io Jan 13 20:43:54.021112 containerd[1466]: time="2025-01-13T20:43:54.021108282Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:43:54.254685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081-rootfs.mount: Deactivated successfully. 
Jan 13 20:43:54.849998 containerd[1466]: time="2025-01-13T20:43:54.848735255Z" level=info msg="CreateContainer within sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:43:54.907475 containerd[1466]: time="2025-01-13T20:43:54.907210714Z" level=info msg="CreateContainer within sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5\"" Jan 13 20:43:54.909521 containerd[1466]: time="2025-01-13T20:43:54.908561567Z" level=info msg="StartContainer for \"afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5\"" Jan 13 20:43:54.968479 systemd[1]: Started cri-containerd-afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5.scope - libcontainer container afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5. Jan 13 20:43:54.995460 systemd[1]: cri-containerd-afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5.scope: Deactivated successfully. Jan 13 20:43:55.001936 containerd[1466]: time="2025-01-13T20:43:55.001093823Z" level=info msg="StartContainer for \"afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5\" returns successfully" Jan 13 20:43:55.033055 containerd[1466]: time="2025-01-13T20:43:55.033002063Z" level=info msg="shim disconnected" id=afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5 namespace=k8s.io Jan 13 20:43:55.033339 containerd[1466]: time="2025-01-13T20:43:55.033257378Z" level=warning msg="cleaning up after shim disconnected" id=afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5 namespace=k8s.io Jan 13 20:43:55.033339 containerd[1466]: time="2025-01-13T20:43:55.033274851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:43:55.257247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5-rootfs.mount: Deactivated successfully. Jan 13 20:43:55.498832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886226614.mount: Deactivated successfully. Jan 13 20:43:55.856793 containerd[1466]: time="2025-01-13T20:43:55.856594071Z" level=info msg="CreateContainer within sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:43:55.890137 containerd[1466]: time="2025-01-13T20:43:55.890082428Z" level=info msg="CreateContainer within sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\"" Jan 13 20:43:55.892125 containerd[1466]: time="2025-01-13T20:43:55.891956813Z" level=info msg="StartContainer for \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\"" Jan 13 20:43:55.938510 systemd[1]: Started cri-containerd-00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f.scope - libcontainer container 00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f. 
Jan 13 20:43:55.989990 containerd[1466]: time="2025-01-13T20:43:55.989830058Z" level=info msg="StartContainer for \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\" returns successfully" Jan 13 20:43:56.089354 kubelet[2690]: I0113 20:43:56.085690 2690 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:43:56.116833 kubelet[2690]: I0113 20:43:56.116722 2690 topology_manager.go:215] "Topology Admit Handler" podUID="3cfbaeef-eef0-407b-9aaa-28cbef50556b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tkdmw" Jan 13 20:43:56.123604 kubelet[2690]: I0113 20:43:56.123308 2690 topology_manager.go:215] "Topology Admit Handler" podUID="5ddd26b1-8dca-44fa-99f4-74cead5188a2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qjv2j" Jan 13 20:43:56.126967 systemd[1]: Created slice kubepods-burstable-pod3cfbaeef_eef0_407b_9aaa_28cbef50556b.slice - libcontainer container kubepods-burstable-pod3cfbaeef_eef0_407b_9aaa_28cbef50556b.slice. Jan 13 20:43:56.134904 systemd[1]: Created slice kubepods-burstable-pod5ddd26b1_8dca_44fa_99f4_74cead5188a2.slice - libcontainer container kubepods-burstable-pod5ddd26b1_8dca_44fa_99f4_74cead5188a2.slice. Jan 13 20:43:56.190820 kubelet[2690]: I0113 20:43:56.190789 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xfzq\" (UniqueName: \"kubernetes.io/projected/5ddd26b1-8dca-44fa-99f4-74cead5188a2-kube-api-access-4xfzq\") pod \"coredns-7db6d8ff4d-qjv2j\" (UID: \"5ddd26b1-8dca-44fa-99f4-74cead5188a2\") " pod="kube-system/coredns-7db6d8ff4d-qjv2j" Jan 13 20:43:56.191020 kubelet[2690]: I0113 20:43:56.191004 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cfbaeef-eef0-407b-9aaa-28cbef50556b-config-volume\") pod \"coredns-7db6d8ff4d-tkdmw\" (UID: \"3cfbaeef-eef0-407b-9aaa-28cbef50556b\") " pod="kube-system/coredns-7db6d8ff4d-tkdmw" Jan 13 20:43:56.191184 kubelet[2690]: I0113 20:43:56.191133 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2rgr\" (UniqueName: \"kubernetes.io/projected/3cfbaeef-eef0-407b-9aaa-28cbef50556b-kube-api-access-t2rgr\") pod \"coredns-7db6d8ff4d-tkdmw\" (UID: \"3cfbaeef-eef0-407b-9aaa-28cbef50556b\") " pod="kube-system/coredns-7db6d8ff4d-tkdmw" Jan 13 20:43:56.191273 kubelet[2690]: I0113 20:43:56.191159 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ddd26b1-8dca-44fa-99f4-74cead5188a2-config-volume\") pod \"coredns-7db6d8ff4d-qjv2j\" (UID: \"5ddd26b1-8dca-44fa-99f4-74cead5188a2\") " pod="kube-system/coredns-7db6d8ff4d-qjv2j" Jan 13 20:43:56.431057 containerd[1466]: time="2025-01-13T20:43:56.430938125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tkdmw,Uid:3cfbaeef-eef0-407b-9aaa-28cbef50556b,Namespace:kube-system,Attempt:0,}" Jan 13 20:43:56.440133 containerd[1466]: time="2025-01-13T20:43:56.439891504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qjv2j,Uid:5ddd26b1-8dca-44fa-99f4-74cead5188a2,Namespace:kube-system,Attempt:0,}" Jan 13 20:43:56.880572 kubelet[2690]: I0113 20:43:56.880059 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6vfwt" podStartSLOduration=6.41643475 podStartE2EDuration="17.880039678s" 
podCreationTimestamp="2025-01-13 20:43:39 +0000 UTC" firstStartedPulling="2025-01-13 20:43:39.768913482 +0000 UTC m=+13.272429285" lastFinishedPulling="2025-01-13 20:43:51.23251841 +0000 UTC m=+24.736034213" observedRunningTime="2025-01-13 20:43:56.877298069 +0000 UTC m=+30.380813862" watchObservedRunningTime="2025-01-13 20:43:56.880039678 +0000 UTC m=+30.383555482" Jan 13 20:44:04.523060 containerd[1466]: time="2025-01-13T20:44:04.522958641Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:04.524853 containerd[1466]: time="2025-01-13T20:44:04.524674442Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906009" Jan 13 20:44:04.527445 containerd[1466]: time="2025-01-13T20:44:04.526379311Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:44:04.527942 containerd[1466]: time="2025-01-13T20:44:04.527911855Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 13.293107255s" Jan 13 20:44:04.527998 containerd[1466]: time="2025-01-13T20:44:04.527942744Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 20:44:04.530484 containerd[1466]: time="2025-01-13T20:44:04.530458779Z" level=info msg="CreateContainer within sandbox \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:44:04.557973 containerd[1466]: time="2025-01-13T20:44:04.557937477Z" level=info msg="CreateContainer within sandbox \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\"" Jan 13 20:44:04.558723 containerd[1466]: time="2025-01-13T20:44:04.558702616Z" level=info msg="StartContainer for \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\"" Jan 13 20:44:04.595488 systemd[1]: Started cri-containerd-b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a.scope - libcontainer container b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a. 
Jan 13 20:44:04.629733 containerd[1466]: time="2025-01-13T20:44:04.629692351Z" level=info msg="StartContainer for \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\" returns successfully" Jan 13 20:44:04.910212 kubelet[2690]: I0113 20:44:04.910057 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-6ps6g" podStartSLOduration=1.33015513 podStartE2EDuration="25.910038761s" podCreationTimestamp="2025-01-13 20:43:39 +0000 UTC" firstStartedPulling="2025-01-13 20:43:39.949114342 +0000 UTC m=+13.452630135" lastFinishedPulling="2025-01-13 20:44:04.528997973 +0000 UTC m=+38.032513766" observedRunningTime="2025-01-13 20:44:04.909938511 +0000 UTC m=+38.413454304" watchObservedRunningTime="2025-01-13 20:44:04.910038761 +0000 UTC m=+38.413554564" Jan 13 20:44:08.996065 systemd-networkd[1376]: cilium_host: Link UP Jan 13 20:44:08.996874 systemd-networkd[1376]: cilium_net: Link UP Jan 13 20:44:08.997746 systemd-networkd[1376]: cilium_net: Gained carrier Jan 13 20:44:08.998288 systemd-networkd[1376]: cilium_host: Gained carrier Jan 13 20:44:09.117639 systemd-networkd[1376]: cilium_vxlan: Link UP Jan 13 20:44:09.117646 systemd-networkd[1376]: cilium_vxlan: Gained carrier Jan 13 20:44:09.433465 systemd-networkd[1376]: cilium_host: Gained IPv6LL Jan 13 20:44:09.459201 kernel: NET: Registered PF_ALG protocol family Jan 13 20:44:09.521553 systemd-networkd[1376]: cilium_net: Gained IPv6LL Jan 13 20:44:10.304484 systemd-networkd[1376]: lxc_health: Link UP Jan 13 20:44:10.309405 systemd-networkd[1376]: lxc_health: Gained carrier Jan 13 20:44:10.545919 systemd-networkd[1376]: lxcc2499ad45484: Link UP Jan 13 20:44:10.552383 kernel: eth0: renamed from tmpd207c Jan 13 20:44:10.561623 systemd-networkd[1376]: lxc6254c38ee423: Link UP Jan 13 20:44:10.572381 kernel: eth0: renamed from tmp02045 Jan 13 20:44:10.577941 systemd-networkd[1376]: lxcc2499ad45484: Gained carrier Jan 13 20:44:10.581614 systemd-networkd[1376]: lxc6254c38ee423: Gained carrier Jan 13 20:44:10.673488 systemd-networkd[1376]: cilium_vxlan: Gained IPv6LL Jan 13 20:44:11.825555 systemd-networkd[1376]: lxc6254c38ee423: Gained IPv6LL Jan 13 20:44:12.081606 systemd-networkd[1376]: lxc_health: Gained IPv6LL Jan 13 20:44:12.083830 systemd-networkd[1376]: lxcc2499ad45484: Gained IPv6LL Jan 13 20:44:15.107415 containerd[1466]: time="2025-01-13T20:44:15.106366216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:44:15.107415 containerd[1466]: time="2025-01-13T20:44:15.106426999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:44:15.107415 containerd[1466]: time="2025-01-13T20:44:15.106446175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:44:15.107415 containerd[1466]: time="2025-01-13T20:44:15.106567571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:44:15.138483 systemd[1]: Started cri-containerd-d207c146c264a6623d586a4f910680c0380125cdfc5dfe2466f73071ddf11bb0.scope - libcontainer container d207c146c264a6623d586a4f910680c0380125cdfc5dfe2466f73071ddf11bb0. Jan 13 20:44:15.190510 containerd[1466]: time="2025-01-13T20:44:15.189558258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:44:15.190510 containerd[1466]: time="2025-01-13T20:44:15.189621125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:44:15.190510 containerd[1466]: time="2025-01-13T20:44:15.189635011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:44:15.190510 containerd[1466]: time="2025-01-13T20:44:15.189717775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:44:15.210906 containerd[1466]: time="2025-01-13T20:44:15.210633450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tkdmw,Uid:3cfbaeef-eef0-407b-9aaa-28cbef50556b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d207c146c264a6623d586a4f910680c0380125cdfc5dfe2466f73071ddf11bb0\"" Jan 13 20:44:15.215128 containerd[1466]: time="2025-01-13T20:44:15.215070260Z" level=info msg="CreateContainer within sandbox \"d207c146c264a6623d586a4f910680c0380125cdfc5dfe2466f73071ddf11bb0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:44:15.227608 systemd[1]: run-containerd-runc-k8s.io-0204527cfd5ef378278c1d76f493eb860e58102d07948fdf26e1c7adb2eca09d-runc.wdimdf.mount: Deactivated successfully. Jan 13 20:44:15.239474 systemd[1]: Started cri-containerd-0204527cfd5ef378278c1d76f493eb860e58102d07948fdf26e1c7adb2eca09d.scope - libcontainer container 0204527cfd5ef378278c1d76f493eb860e58102d07948fdf26e1c7adb2eca09d. Jan 13 20:44:15.256850 containerd[1466]: time="2025-01-13T20:44:15.256776706Z" level=info msg="CreateContainer within sandbox \"d207c146c264a6623d586a4f910680c0380125cdfc5dfe2466f73071ddf11bb0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dbddac4dbbd7441caa377ff47e1298c423704568c9535a07bb893ba06ac16414\"" Jan 13 20:44:15.258947 containerd[1466]: time="2025-01-13T20:44:15.258008839Z" level=info msg="StartContainer for \"dbddac4dbbd7441caa377ff47e1298c423704568c9535a07bb893ba06ac16414\"" Jan 13 20:44:15.293479 systemd[1]: Started cri-containerd-dbddac4dbbd7441caa377ff47e1298c423704568c9535a07bb893ba06ac16414.scope - libcontainer container dbddac4dbbd7441caa377ff47e1298c423704568c9535a07bb893ba06ac16414. 
Jan 13 20:44:15.343602 containerd[1466]: time="2025-01-13T20:44:15.343569560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qjv2j,Uid:5ddd26b1-8dca-44fa-99f4-74cead5188a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0204527cfd5ef378278c1d76f493eb860e58102d07948fdf26e1c7adb2eca09d\"" Jan 13 20:44:15.356865 containerd[1466]: time="2025-01-13T20:44:15.356430802Z" level=info msg="CreateContainer within sandbox \"0204527cfd5ef378278c1d76f493eb860e58102d07948fdf26e1c7adb2eca09d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:44:15.359975 containerd[1466]: time="2025-01-13T20:44:15.359440284Z" level=info msg="StartContainer for \"dbddac4dbbd7441caa377ff47e1298c423704568c9535a07bb893ba06ac16414\" returns successfully" Jan 13 20:44:15.386654 containerd[1466]: time="2025-01-13T20:44:15.386531244Z" level=info msg="CreateContainer within sandbox \"0204527cfd5ef378278c1d76f493eb860e58102d07948fdf26e1c7adb2eca09d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c8593b2ef2d3cbb81fac216e6e08a58a8909a1eeaf4dff478c5489904fa7375d\"" Jan 13 20:44:15.389391 containerd[1466]: time="2025-01-13T20:44:15.388394382Z" level=info msg="StartContainer for \"c8593b2ef2d3cbb81fac216e6e08a58a8909a1eeaf4dff478c5489904fa7375d\"" Jan 13 20:44:15.422518 systemd[1]: Started cri-containerd-c8593b2ef2d3cbb81fac216e6e08a58a8909a1eeaf4dff478c5489904fa7375d.scope - libcontainer container c8593b2ef2d3cbb81fac216e6e08a58a8909a1eeaf4dff478c5489904fa7375d. Jan 13 20:44:15.465251 containerd[1466]: time="2025-01-13T20:44:15.465143240Z" level=info msg="StartContainer for \"c8593b2ef2d3cbb81fac216e6e08a58a8909a1eeaf4dff478c5489904fa7375d\" returns successfully" Jan 13 20:44:15.943923 kubelet[2690]: I0113 20:44:15.943806 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qjv2j" podStartSLOduration=36.943773321 podStartE2EDuration="36.943773321s" podCreationTimestamp="2025-01-13 20:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:44:15.94161855 +0000 UTC m=+49.445134403" watchObservedRunningTime="2025-01-13 20:44:15.943773321 +0000 UTC m=+49.447289164" Jan 13 20:44:16.007708 kubelet[2690]: I0113 20:44:16.006515 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tkdmw" podStartSLOduration=37.006478652 podStartE2EDuration="37.006478652s" podCreationTimestamp="2025-01-13 20:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:44:16.004764059 +0000 UTC m=+49.508279922" watchObservedRunningTime="2025-01-13 20:44:16.006478652 +0000 UTC m=+49.509994495" Jan 13 20:44:49.309914 systemd[1]: Started sshd@7-172.24.4.153:22-172.24.4.1:52134.service - OpenSSH per-connection server daemon (172.24.4.1:52134). Jan 13 20:44:50.736009 sshd[4068]: Accepted publickey for core from 172.24.4.1 port 52134 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:44:50.738892 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:50.749994 systemd-logind[1450]: New session 10 of user core. Jan 13 20:44:50.764643 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 13 20:44:51.393378 sshd[4070]: Connection closed by 172.24.4.1 port 52134 Jan 13 20:44:51.392282 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:51.398865 systemd[1]: sshd@7-172.24.4.153:22-172.24.4.1:52134.service: Deactivated successfully. Jan 13 20:44:51.404310 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:44:51.408974 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:44:51.411559 systemd-logind[1450]: Removed session 10. Jan 13 20:44:56.417965 systemd[1]: Started sshd@8-172.24.4.153:22-172.24.4.1:45132.service - OpenSSH per-connection server daemon (172.24.4.1:45132). Jan 13 20:44:57.693895 sshd[4083]: Accepted publickey for core from 172.24.4.1 port 45132 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:44:57.696655 sshd-session[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:57.707242 systemd-logind[1450]: New session 11 of user core. Jan 13 20:44:57.716662 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:44:58.473750 sshd[4085]: Connection closed by 172.24.4.1 port 45132 Jan 13 20:44:58.474884 sshd-session[4083]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:58.481925 systemd[1]: sshd@8-172.24.4.153:22-172.24.4.1:45132.service: Deactivated successfully. Jan 13 20:44:58.488217 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:44:58.490809 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:44:58.493684 systemd-logind[1450]: Removed session 11. Jan 13 20:45:03.500060 systemd[1]: Started sshd@9-172.24.4.153:22-172.24.4.1:45140.service - OpenSSH per-connection server daemon (172.24.4.1:45140). Jan 13 20:45:04.815464 sshd[4096]: Accepted publickey for core from 172.24.4.1 port 45140 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:45:04.819762 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:04.836501 systemd-logind[1450]: New session 12 of user core. Jan 13 20:45:04.849137 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:45:05.535314 sshd[4098]: Connection closed by 172.24.4.1 port 45140 Jan 13 20:45:05.536836 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:05.541965 systemd[1]: sshd@9-172.24.4.153:22-172.24.4.1:45140.service: Deactivated successfully. Jan 13 20:45:05.546136 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:45:05.548081 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:45:05.549681 systemd-logind[1450]: Removed session 12. Jan 13 20:45:10.559906 systemd[1]: Started sshd@10-172.24.4.153:22-172.24.4.1:54916.service - OpenSSH per-connection server daemon (172.24.4.1:54916). Jan 13 20:45:11.737894 sshd[4112]: Accepted publickey for core from 172.24.4.1 port 54916 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:45:11.739507 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:11.744967 systemd-logind[1450]: New session 13 of user core. Jan 13 20:45:11.748467 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 13 20:45:12.383560 sshd[4115]: Connection closed by 172.24.4.1 port 54916 Jan 13 20:45:12.383316 sshd-session[4112]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:12.397128 systemd[1]: sshd@10-172.24.4.153:22-172.24.4.1:54916.service: Deactivated successfully. Jan 13 20:45:12.403068 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:45:12.405450 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:45:12.416053 systemd[1]: Started sshd@11-172.24.4.153:22-172.24.4.1:54922.service - OpenSSH per-connection server daemon (172.24.4.1:54922). Jan 13 20:45:12.420929 systemd-logind[1450]: Removed session 13. Jan 13 20:45:13.825417 sshd[4127]: Accepted publickey for core from 172.24.4.1 port 54922 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:45:13.828484 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:13.840560 systemd-logind[1450]: New session 14 of user core. Jan 13 20:45:13.845943 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:45:14.667384 sshd[4129]: Connection closed by 172.24.4.1 port 54922 Jan 13 20:45:14.667487 sshd-session[4127]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:14.681945 systemd[1]: sshd@11-172.24.4.153:22-172.24.4.1:54922.service: Deactivated successfully. Jan 13 20:45:14.691700 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:45:14.693905 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:45:14.703953 systemd[1]: Started sshd@12-172.24.4.153:22-172.24.4.1:42498.service - OpenSSH per-connection server daemon (172.24.4.1:42498). Jan 13 20:45:14.708951 systemd-logind[1450]: Removed session 14. Jan 13 20:45:15.943207 sshd[4138]: Accepted publickey for core from 172.24.4.1 port 42498 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:45:15.946563 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:15.956911 systemd-logind[1450]: New session 15 of user core. Jan 13 20:45:15.971641 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:45:16.650829 sshd[4140]: Connection closed by 172.24.4.1 port 42498 Jan 13 20:45:16.652297 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:16.659293 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:45:16.660017 systemd[1]: sshd@12-172.24.4.153:22-172.24.4.1:42498.service: Deactivated successfully. Jan 13 20:45:16.666127 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:45:16.673738 systemd-logind[1450]: Removed session 15. Jan 13 20:45:21.677994 systemd[1]: Started sshd@13-172.24.4.153:22-172.24.4.1:42502.service - OpenSSH per-connection server daemon (172.24.4.1:42502). Jan 13 20:45:22.957679 sshd[4151]: Accepted publickey for core from 172.24.4.1 port 42502 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:45:22.960628 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:22.971368 systemd-logind[1450]: New session 16 of user core. Jan 13 20:45:22.985739 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 13 20:45:23.738875 sshd[4153]: Connection closed by 172.24.4.1 port 42502 Jan 13 20:45:23.740496 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:23.751951 systemd[1]: sshd@13-172.24.4.153:22-172.24.4.1:42502.service: Deactivated successfully. Jan 13 20:45:23.756819 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:45:23.758258 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:45:23.769662 systemd[1]: Started sshd@14-172.24.4.153:22-172.24.4.1:51216.service - OpenSSH per-connection server daemon (172.24.4.1:51216). Jan 13 20:45:23.772497 systemd-logind[1450]: Removed session 16. Jan 13 20:45:25.243359 sshd[4164]: Accepted publickey for core from 172.24.4.1 port 51216 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:45:25.246106 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:25.256831 systemd-logind[1450]: New session 17 of user core. Jan 13 20:45:25.265642 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:45:26.002144 sshd[4166]: Connection closed by 172.24.4.1 port 51216 Jan 13 20:45:26.001129 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:26.010416 systemd[1]: sshd@14-172.24.4.153:22-172.24.4.1:51216.service: Deactivated successfully. Jan 13 20:45:26.012465 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:45:26.016409 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:45:26.026932 systemd[1]: Started sshd@15-172.24.4.153:22-172.24.4.1:51224.service - OpenSSH per-connection server daemon (172.24.4.1:51224). Jan 13 20:45:26.031160 systemd-logind[1450]: Removed session 17. Jan 13 20:45:27.350030 sshd[4175]: Accepted publickey for core from 172.24.4.1 port 51224 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:45:27.353532 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:27.367265 systemd-logind[1450]: New session 18 of user core. Jan 13 20:45:27.378715 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:45:29.961110 sshd[4179]: Connection closed by 172.24.4.1 port 51224 Jan 13 20:45:29.961494 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:29.978855 systemd[1]: sshd@15-172.24.4.153:22-172.24.4.1:51224.service: Deactivated successfully. Jan 13 20:45:29.983160 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:45:29.986203 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:45:29.995964 systemd[1]: Started sshd@16-172.24.4.153:22-172.24.4.1:51230.service - OpenSSH per-connection server daemon (172.24.4.1:51230). Jan 13 20:45:29.999733 systemd-logind[1450]: Removed session 18. Jan 13 20:45:31.410433 sshd[4195]: Accepted publickey for core from 172.24.4.1 port 51230 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:45:31.414201 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:31.428472 systemd-logind[1450]: New session 19 of user core. Jan 13 20:45:31.434755 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 13 20:45:32.772805 sshd[4197]: Connection closed by 172.24.4.1 port 51230 Jan 13 20:45:32.776129 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:32.790042 systemd[1]: sshd@16-172.24.4.153:22-172.24.4.1:51230.service: Deactivated successfully. Jan 13 20:45:32.795903 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:45:32.798679 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:45:32.809199 systemd[1]: Started sshd@17-172.24.4.153:22-172.24.4.1:51242.service - OpenSSH per-connection server daemon (172.24.4.1:51242). Jan 13 20:45:32.814937 systemd-logind[1450]: Removed session 19. Jan 13 20:45:34.068418 sshd[4206]: Accepted publickey for core from 172.24.4.1 port 51242 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:45:34.071103 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:34.082004 systemd-logind[1450]: New session 20 of user core. Jan 13 20:45:34.092613 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:45:34.896702 sshd[4208]: Connection closed by 172.24.4.1 port 51242 Jan 13 20:45:34.895983 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:34.900320 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:45:34.900388 systemd[1]: sshd@17-172.24.4.153:22-172.24.4.1:51242.service: Deactivated successfully. Jan 13 20:45:34.901958 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:45:34.904900 systemd-logind[1450]: Removed session 20. Jan 13 20:45:39.921012 systemd[1]: Started sshd@18-172.24.4.153:22-172.24.4.1:38018.service - OpenSSH per-connection server daemon (172.24.4.1:38018). Jan 13 20:45:41.246952 sshd[4221]: Accepted publickey for core from 172.24.4.1 port 38018 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:45:41.250240 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:41.262373 systemd-logind[1450]: New session 21 of user core. Jan 13 20:45:41.270747 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:45:41.986550 sshd[4225]: Connection closed by 172.24.4.1 port 38018 Jan 13 20:45:41.986284 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:41.994048 systemd[1]: sshd@18-172.24.4.153:22-172.24.4.1:38018.service: Deactivated successfully. Jan 13 20:45:42.000067 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:45:42.002609 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:45:42.005374 systemd-logind[1450]: Removed session 21. Jan 13 20:45:47.006955 systemd[1]: Started sshd@19-172.24.4.153:22-172.24.4.1:36948.service - OpenSSH per-connection server daemon (172.24.4.1:36948). Jan 13 20:45:48.537881 sshd[4236]: Accepted publickey for core from 172.24.4.1 port 36948 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:45:48.540435 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:48.550568 systemd-logind[1450]: New session 22 of user core. Jan 13 20:45:48.559617 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 13 20:45:49.377131 sshd[4238]: Connection closed by 172.24.4.1 port 36948 Jan 13 20:45:49.379011 sshd-session[4236]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:49.385924 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:45:49.386374 systemd[1]: sshd@19-172.24.4.153:22-172.24.4.1:36948.service: Deactivated successfully. Jan 13 20:45:49.390175 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:45:49.395412 systemd-logind[1450]: Removed session 22. Jan 13 20:45:54.411065 systemd[1]: Started sshd@20-172.24.4.153:22-172.24.4.1:35636.service - OpenSSH per-connection server daemon (172.24.4.1:35636). Jan 13 20:45:55.833671 sshd[4249]: Accepted publickey for core from 172.24.4.1 port 35636 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:45:55.836783 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:55.848884 systemd-logind[1450]: New session 23 of user core. Jan 13 20:45:55.854643 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:45:56.518364 sshd[4251]: Connection closed by 172.24.4.1 port 35636 Jan 13 20:45:56.521458 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:56.531082 systemd[1]: sshd@20-172.24.4.153:22-172.24.4.1:35636.service: Deactivated successfully. Jan 13 20:45:56.536537 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:45:56.540108 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:45:56.554188 systemd[1]: Started sshd@21-172.24.4.153:22-172.24.4.1:35640.service - OpenSSH per-connection server daemon (172.24.4.1:35640). Jan 13 20:45:56.558471 systemd-logind[1450]: Removed session 23. Jan 13 20:45:57.966412 sshd[4262]: Accepted publickey for core from 172.24.4.1 port 35640 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:45:57.968771 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:57.984724 systemd-logind[1450]: New session 24 of user core. Jan 13 20:45:58.001747 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:46:00.894468 containerd[1466]: time="2025-01-13T20:46:00.894408866Z" level=info msg="StopContainer for \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\" with timeout 30 (s)" Jan 13 20:46:00.896270 containerd[1466]: time="2025-01-13T20:46:00.896235503Z" level=info msg="Stop container \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\" with signal terminated" Jan 13 20:46:00.913859 systemd[1]: cri-containerd-b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a.scope: Deactivated successfully. Jan 13 20:46:00.943770 containerd[1466]: time="2025-01-13T20:46:00.943714040Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:46:00.944552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a-rootfs.mount: Deactivated successfully. 
Jan 13 20:46:00.954887 containerd[1466]: time="2025-01-13T20:46:00.954721398Z" level=info msg="StopContainer for \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\" with timeout 2 (s)" Jan 13 20:46:00.957418 containerd[1466]: time="2025-01-13T20:46:00.955122514Z" level=info msg="Stop container \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\" with signal terminated" Jan 13 20:46:00.964561 systemd-networkd[1376]: lxc_health: Link DOWN Jan 13 20:46:00.964570 systemd-networkd[1376]: lxc_health: Lost carrier Jan 13 20:46:00.972285 containerd[1466]: time="2025-01-13T20:46:00.972119263Z" level=info msg="shim disconnected" id=b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a namespace=k8s.io Jan 13 20:46:00.972285 containerd[1466]: time="2025-01-13T20:46:00.972174567Z" level=warning msg="cleaning up after shim disconnected" id=b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a namespace=k8s.io Jan 13 20:46:00.972285 containerd[1466]: time="2025-01-13T20:46:00.972184125Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:00.982813 systemd[1]: cri-containerd-00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f.scope: Deactivated successfully. Jan 13 20:46:00.983029 systemd[1]: cri-containerd-00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f.scope: Consumed 8.571s CPU time. Jan 13 20:46:01.009036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f-rootfs.mount: Deactivated successfully. Jan 13 20:46:01.009951 containerd[1466]: time="2025-01-13T20:46:01.009913269Z" level=info msg="StopContainer for \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\" returns successfully" Jan 13 20:46:01.012053 containerd[1466]: time="2025-01-13T20:46:01.012019282Z" level=info msg="StopPodSandbox for \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\"" Jan 13 20:46:01.012399 containerd[1466]: time="2025-01-13T20:46:01.012224831Z" level=info msg="Container to stop \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:46:01.015601 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c-shm.mount: Deactivated successfully. Jan 13 20:46:01.022277 systemd[1]: cri-containerd-e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c.scope: Deactivated successfully. 
Jan 13 20:46:01.033644 containerd[1466]: time="2025-01-13T20:46:01.033421783Z" level=info msg="shim disconnected" id=00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f namespace=k8s.io Jan 13 20:46:01.033644 containerd[1466]: time="2025-01-13T20:46:01.033494951Z" level=warning msg="cleaning up after shim disconnected" id=00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f namespace=k8s.io Jan 13 20:46:01.033644 containerd[1466]: time="2025-01-13T20:46:01.033506483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:01.053468 containerd[1466]: time="2025-01-13T20:46:01.053422910Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:46:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:46:01.053736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c-rootfs.mount: Deactivated successfully. Jan 13 20:46:01.060968 containerd[1466]: time="2025-01-13T20:46:01.060764572Z" level=info msg="shim disconnected" id=e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c namespace=k8s.io Jan 13 20:46:01.060968 containerd[1466]: time="2025-01-13T20:46:01.060821629Z" level=warning msg="cleaning up after shim disconnected" id=e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c namespace=k8s.io Jan 13 20:46:01.060968 containerd[1466]: time="2025-01-13T20:46:01.060832199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:01.067680 containerd[1466]: time="2025-01-13T20:46:01.067300623Z" level=info msg="StopContainer for \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\" returns successfully" Jan 13 20:46:01.068179 containerd[1466]: time="2025-01-13T20:46:01.067903000Z" level=info msg="StopPodSandbox for \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\"" Jan 13 20:46:01.068179 containerd[1466]: time="2025-01-13T20:46:01.067936302Z" level=info msg="Container to stop \"f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:46:01.068179 containerd[1466]: time="2025-01-13T20:46:01.067970958Z" level=info msg="Container to stop \"21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:46:01.068179 containerd[1466]: time="2025-01-13T20:46:01.067981658Z" level=info msg="Container to stop \"f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:46:01.068179 containerd[1466]: time="2025-01-13T20:46:01.067992569Z" level=info msg="Container to stop \"afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:46:01.068179 containerd[1466]: time="2025-01-13T20:46:01.068004802Z" level=info msg="Container to stop \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:46:01.076833 containerd[1466]: time="2025-01-13T20:46:01.076784898Z" level=info msg="TearDown network for sandbox \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\" successfully" Jan 13 20:46:01.076833 containerd[1466]: 
time="2025-01-13T20:46:01.076823310Z" level=info msg="StopPodSandbox for \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\" returns successfully" Jan 13 20:46:01.080338 systemd[1]: cri-containerd-c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53.scope: Deactivated successfully. Jan 13 20:46:01.137986 containerd[1466]: time="2025-01-13T20:46:01.137924031Z" level=info msg="shim disconnected" id=c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53 namespace=k8s.io Jan 13 20:46:01.137986 containerd[1466]: time="2025-01-13T20:46:01.137981399Z" level=warning msg="cleaning up after shim disconnected" id=c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53 namespace=k8s.io Jan 13 20:46:01.137986 containerd[1466]: time="2025-01-13T20:46:01.137990777Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:01.152635 containerd[1466]: time="2025-01-13T20:46:01.152450889Z" level=info msg="TearDown network for sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" successfully" Jan 13 20:46:01.152635 containerd[1466]: time="2025-01-13T20:46:01.152486577Z" level=info msg="StopPodSandbox for \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" returns successfully" Jan 13 20:46:01.232626 kubelet[2690]: I0113 20:46:01.232514 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f-cilium-config-path\") pod \"8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f\" (UID: \"8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f\") " Jan 13 20:46:01.232626 kubelet[2690]: I0113 20:46:01.232560 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql2vb\" (UniqueName: \"kubernetes.io/projected/8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f-kube-api-access-ql2vb\") pod \"8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f\" (UID: \"8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f\") " Jan 13 20:46:01.236882 kubelet[2690]: I0113 20:46:01.236782 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f" (UID: "8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:46:01.237346 kubelet[2690]: I0113 20:46:01.237241 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f-kube-api-access-ql2vb" (OuterVolumeSpecName: "kube-api-access-ql2vb") pod "8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f" (UID: "8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f"). InnerVolumeSpecName "kube-api-access-ql2vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:46:01.296783 kubelet[2690]: I0113 20:46:01.296520 2690 scope.go:117] "RemoveContainer" containerID="b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a" Jan 13 20:46:01.307968 containerd[1466]: time="2025-01-13T20:46:01.307357486Z" level=info msg="RemoveContainer for \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\"" Jan 13 20:46:01.308623 systemd[1]: Removed slice kubepods-besteffort-pod8b0d85ce_5a3d_48e0_9ae8_d4c88a98997f.slice - libcontainer container kubepods-besteffort-pod8b0d85ce_5a3d_48e0_9ae8_d4c88a98997f.slice. 
Jan 13 20:46:01.333448 kubelet[2690]: I0113 20:46:01.332878 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-bpf-maps\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.333448 kubelet[2690]: I0113 20:46:01.332958 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-xtables-lock\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.333448 kubelet[2690]: I0113 20:46:01.332992 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:46:01.333448 kubelet[2690]: I0113 20:46:01.333013 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/374419b6-8645-485c-9a51-3b66501bb499-hubble-tls\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.333448 kubelet[2690]: I0113 20:46:01.333053 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:46:01.333448 kubelet[2690]: I0113 20:46:01.333059 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-host-proc-sys-net\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.334041 kubelet[2690]: I0113 20:46:01.333109 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-host-proc-sys-kernel\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.334041 kubelet[2690]: I0113 20:46:01.333149 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-hostproc\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.334041 kubelet[2690]: I0113 20:46:01.333188 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-etc-cni-netd\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.334041 kubelet[2690]: I0113 20:46:01.333226 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-cni-path\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.334041 kubelet[2690]: I0113 20:46:01.333265 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-lib-modules\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.334041 kubelet[2690]: I0113 20:46:01.333310 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhnk2\" (UniqueName: \"kubernetes.io/projected/374419b6-8645-485c-9a51-3b66501bb499-kube-api-access-mhnk2\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.335391 kubelet[2690]: I0113 20:46:01.334540 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/374419b6-8645-485c-9a51-3b66501bb499-cilium-config-path\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.335391 kubelet[2690]: I0113 20:46:01.334603 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-cilium-run\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.335391 kubelet[2690]: I0113 20:46:01.334644 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-cilium-cgroup\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.335391 kubelet[2690]: I0113 20:46:01.334690 2690 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/374419b6-8645-485c-9a51-3b66501bb499-clustermesh-secrets\") pod \"374419b6-8645-485c-9a51-3b66501bb499\" (UID: \"374419b6-8645-485c-9a51-3b66501bb499\") "
Jan 13 20:46:01.335391 kubelet[2690]: I0113 20:46:01.334764 2690 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ql2vb\" (UniqueName: \"kubernetes.io/projected/8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f-kube-api-access-ql2vb\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.335391 kubelet[2690]: I0113 20:46:01.334791 2690 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-bpf-maps\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.335391 kubelet[2690]: I0113 20:46:01.334817 2690 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-xtables-lock\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.335880 kubelet[2690]: I0113 20:46:01.334843 2690 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f-cilium-config-path\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.339051 kubelet[2690]: I0113 20:46:01.334132 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-hostproc" (OuterVolumeSpecName: "hostproc") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:46:01.339051 kubelet[2690]: I0113 20:46:01.334149 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:46:01.339051 kubelet[2690]: I0113 20:46:01.334161 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:46:01.339051 kubelet[2690]: I0113 20:46:01.334210 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-cni-path" (OuterVolumeSpecName: "cni-path") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:46:01.339051 kubelet[2690]: I0113 20:46:01.334224 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:46:01.340601 kubelet[2690]: I0113 20:46:01.334238 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:46:01.340601 kubelet[2690]: I0113 20:46:01.339131 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374419b6-8645-485c-9a51-3b66501bb499-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:46:01.340601 kubelet[2690]: I0113 20:46:01.339164 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:46:01.340601 kubelet[2690]: I0113 20:46:01.339183 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:46:01.341064 kubelet[2690]: I0113 20:46:01.341022 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374419b6-8645-485c-9a51-3b66501bb499-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:46:01.344746 containerd[1466]: time="2025-01-13T20:46:01.344656930Z" level=info msg="RemoveContainer for \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\" returns successfully"
Jan 13 20:46:01.346692 kubelet[2690]: I0113 20:46:01.346618 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374419b6-8645-485c-9a51-3b66501bb499-kube-api-access-mhnk2" (OuterVolumeSpecName: "kube-api-access-mhnk2") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "kube-api-access-mhnk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:46:01.346832 kubelet[2690]: I0113 20:46:01.346765 2690 scope.go:117] "RemoveContainer" containerID="b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a"
Jan 13 20:46:01.348628 kubelet[2690]: I0113 20:46:01.348242 2690 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374419b6-8645-485c-9a51-3b66501bb499-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "374419b6-8645-485c-9a51-3b66501bb499" (UID: "374419b6-8645-485c-9a51-3b66501bb499"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 20:46:01.348798 containerd[1466]: time="2025-01-13T20:46:01.348310423Z" level=error msg="ContainerStatus for \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\": not found"
Jan 13 20:46:01.348875 kubelet[2690]: E0113 20:46:01.348727 2690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\": not found" containerID="b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a"
Jan 13 20:46:01.348875 kubelet[2690]: I0113 20:46:01.348760 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a"} err="failed to get container status \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b020277e70ea9738246475646b29d85de2d6da3cc2215133edbaa6a1414a806a\": not found"
Jan 13 20:46:01.348875 kubelet[2690]: I0113 20:46:01.348838 2690 scope.go:117] "RemoveContainer" containerID="00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f"
Jan 13 20:46:01.351897 containerd[1466]: time="2025-01-13T20:46:01.351803163Z" level=info msg="RemoveContainer for \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\""
Jan 13 20:46:01.367440 containerd[1466]: time="2025-01-13T20:46:01.367311903Z" level=info msg="RemoveContainer for \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\" returns successfully"
Jan 13 20:46:01.367963 kubelet[2690]: I0113 20:46:01.367874 2690 scope.go:117] "RemoveContainer" containerID="afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5"
Jan 13 20:46:01.371017 containerd[1466]: time="2025-01-13T20:46:01.370919931Z" level=info msg="RemoveContainer for \"afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5\""
Jan 13 20:46:01.380721 containerd[1466]: time="2025-01-13T20:46:01.380652103Z" level=info msg="RemoveContainer for \"afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5\" returns successfully"
Jan 13 20:46:01.381142 kubelet[2690]: I0113 20:46:01.380940 2690 scope.go:117] "RemoveContainer" containerID="f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081"
Jan 13 20:46:01.384402 containerd[1466]: time="2025-01-13T20:46:01.383611366Z" level=info msg="RemoveContainer for \"f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081\""
Jan 13 20:46:01.395249 containerd[1466]: time="2025-01-13T20:46:01.395160366Z" level=info msg="RemoveContainer for \"f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081\" returns successfully"
Jan 13 20:46:01.396203 kubelet[2690]: I0113 20:46:01.395795 2690 scope.go:117] "RemoveContainer" containerID="21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d"
Jan 13 20:46:01.397972 containerd[1466]: time="2025-01-13T20:46:01.397724374Z" level=info msg="RemoveContainer for \"21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d\""
Jan 13 20:46:01.408125 containerd[1466]: time="2025-01-13T20:46:01.407866850Z" level=info msg="RemoveContainer for \"21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d\" returns successfully"
Jan 13 20:46:01.409837 kubelet[2690]: I0113 20:46:01.408484 2690 scope.go:117] "RemoveContainer" containerID="f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff"
Jan 13 20:46:01.415936 containerd[1466]: time="2025-01-13T20:46:01.414304315Z" level=info msg="RemoveContainer for \"f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff\""
Jan 13 20:46:01.424238 containerd[1466]: time="2025-01-13T20:46:01.424142888Z" level=info msg="RemoveContainer for \"f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff\" returns successfully"
Jan 13 20:46:01.424811 kubelet[2690]: I0113 20:46:01.424526 2690 scope.go:117] "RemoveContainer" containerID="00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f"
Jan 13 20:46:01.425397 containerd[1466]: time="2025-01-13T20:46:01.424892682Z" level=error msg="ContainerStatus for \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\": not found"
Jan 13 20:46:01.425506 kubelet[2690]: E0113 20:46:01.425149 2690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\": not found" containerID="00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f"
Jan 13 20:46:01.425506 kubelet[2690]: I0113 20:46:01.425201 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f"} err="failed to get container status \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\": rpc error: code = NotFound desc = an error occurred when try to find container \"00cfdd4eb215b10edba7e623da537fa3e146d774f78a7232f55d27fde9dd566f\": not found"
Jan 13 20:46:01.425506 kubelet[2690]: I0113 20:46:01.425246 2690 scope.go:117] "RemoveContainer" containerID="afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5"
Jan 13 20:46:01.426125 containerd[1466]: time="2025-01-13T20:46:01.426029838Z" level=error msg="ContainerStatus for \"afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5\": not found"
Jan 13 20:46:01.426355 kubelet[2690]: E0113 20:46:01.426283 2690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5\": not found" containerID="afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5"
Jan 13 20:46:01.426516 kubelet[2690]: I0113 20:46:01.426402 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5"} err="failed to get container status \"afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5\": rpc error: code = NotFound desc = an error occurred when try to find container \"afc0e56f6c64fdc8a54d72baaae4b292295cd9177bb70412934f8dae86de1ce5\": not found"
Jan 13 20:46:01.426516 kubelet[2690]: I0113 20:46:01.426457 2690 scope.go:117] "RemoveContainer" containerID="f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081"
Jan 13 20:46:01.426847 containerd[1466]: time="2025-01-13T20:46:01.426793219Z" level=error msg="ContainerStatus for \"f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081\": not found"
Jan 13 20:46:01.427164 kubelet[2690]: E0113 20:46:01.427029 2690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081\": not found" containerID="f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081"
Jan 13 20:46:01.427164 kubelet[2690]: I0113 20:46:01.427073 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081"} err="failed to get container status \"f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7722a0a18c6c73082e4c9cb9846e495bf079f284ea3d732c30b20d2d9b7b081\": not found"
Jan 13 20:46:01.427164 kubelet[2690]: I0113 20:46:01.427113 2690 scope.go:117] "RemoveContainer" containerID="21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d"
Jan 13 20:46:01.427563 containerd[1466]: time="2025-01-13T20:46:01.427460978Z" level=error msg="ContainerStatus for \"21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d\": not found"
Jan 13 20:46:01.428116 kubelet[2690]: E0113 20:46:01.427783 2690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d\": not found" containerID="21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d"
Jan 13 20:46:01.428116 kubelet[2690]: I0113 20:46:01.427837 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d"} err="failed to get container status \"21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d\": rpc error: code = NotFound desc = an error occurred when try to find container \"21c4e953e6d5a0f9d67b78254f0c4d48ef05775ae7acaecd7fe9c178d439cc3d\": not found"
Jan 13 20:46:01.428116 kubelet[2690]: I0113 20:46:01.427873 2690 scope.go:117] "RemoveContainer" containerID="f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff"
Jan 13 20:46:01.428681 kubelet[2690]: E0113 20:46:01.428543 2690 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff\": not found" containerID="f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff"
Jan 13 20:46:01.428681 kubelet[2690]: I0113 20:46:01.428588 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff"} err="failed to get container status \"f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff\": not found"
Jan 13 20:46:01.428847 containerd[1466]: time="2025-01-13T20:46:01.428267740Z" level=error msg="ContainerStatus for \"f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f85a358718e61f7db768b0b41cc338986b1b6a68a71c64220b2ec8eecf6c67ff\": not found"
Jan 13 20:46:01.435736 kubelet[2690]: I0113 20:46:01.435688 2690 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-cilium-run\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.435736 kubelet[2690]: I0113 20:46:01.435735 2690 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-cilium-cgroup\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.435922 kubelet[2690]: I0113 20:46:01.435761 2690 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/374419b6-8645-485c-9a51-3b66501bb499-clustermesh-secrets\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.435922 kubelet[2690]: I0113 20:46:01.435787 2690 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/374419b6-8645-485c-9a51-3b66501bb499-hubble-tls\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.435922 kubelet[2690]: I0113 20:46:01.435811 2690 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-host-proc-sys-net\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.435922 kubelet[2690]: I0113 20:46:01.435834 2690 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-host-proc-sys-kernel\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.435922 kubelet[2690]: I0113 20:46:01.435857 2690 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-hostproc\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.435922 kubelet[2690]: I0113 20:46:01.435879 2690 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-etc-cni-netd\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.435922 kubelet[2690]: I0113 20:46:01.435901 2690 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-cni-path\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.436399 kubelet[2690]: I0113 20:46:01.435923 2690 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/374419b6-8645-485c-9a51-3b66501bb499-lib-modules\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.436399 kubelet[2690]: I0113 20:46:01.435945 2690 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mhnk2\" (UniqueName: \"kubernetes.io/projected/374419b6-8645-485c-9a51-3b66501bb499-kube-api-access-mhnk2\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.436399 kubelet[2690]: I0113 20:46:01.435969 2690 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/374419b6-8645-485c-9a51-3b66501bb499-cilium-config-path\") on node \"ci-4186-1-0-b-778e6b4119.novalocal\" DevicePath \"\""
Jan 13 20:46:01.629145 systemd[1]: Removed slice kubepods-burstable-pod374419b6_8645_485c_9a51_3b66501bb499.slice - libcontainer container kubepods-burstable-pod374419b6_8645_485c_9a51_3b66501bb499.slice.
Jan 13 20:46:01.629400 systemd[1]: kubepods-burstable-pod374419b6_8645_485c_9a51_3b66501bb499.slice: Consumed 8.654s CPU time.
Jan 13 20:46:01.819781 kubelet[2690]: E0113 20:46:01.819708 2690 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:46:01.910732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53-rootfs.mount: Deactivated successfully.
Jan 13 20:46:01.911187 systemd[1]: var-lib-kubelet-pods-8b0d85ce\x2d5a3d\x2d48e0\x2d9ae8\x2dd4c88a98997f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dql2vb.mount: Deactivated successfully.
Jan 13 20:46:01.911614 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53-shm.mount: Deactivated successfully.
Jan 13 20:46:01.911971 systemd[1]: var-lib-kubelet-pods-374419b6\x2d8645\x2d485c\x2d9a51\x2d3b66501bb499-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmhnk2.mount: Deactivated successfully.
Jan 13 20:46:01.912319 systemd[1]: var-lib-kubelet-pods-374419b6\x2d8645\x2d485c\x2d9a51\x2d3b66501bb499-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 20:46:01.912747 systemd[1]: var-lib-kubelet-pods-374419b6\x2d8645\x2d485c\x2d9a51\x2d3b66501bb499-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:46:02.687784 kubelet[2690]: I0113 20:46:02.687671 2690 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="374419b6-8645-485c-9a51-3b66501bb499" path="/var/lib/kubelet/pods/374419b6-8645-485c-9a51-3b66501bb499/volumes"
Jan 13 20:46:02.696157 kubelet[2690]: I0113 20:46:02.696056 2690 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f" path="/var/lib/kubelet/pods/8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f/volumes"
Jan 13 20:46:03.016467 sshd[4264]: Connection closed by 172.24.4.1 port 35640
Jan 13 20:46:03.019709 sshd-session[4262]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:03.030789 systemd[1]: sshd@21-172.24.4.153:22-172.24.4.1:35640.service: Deactivated successfully.
Jan 13 20:46:03.034617 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 20:46:03.034924 systemd[1]: session-24.scope: Consumed 1.606s CPU time.
Jan 13 20:46:03.037276 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit.
Jan 13 20:46:03.044987 systemd[1]: Started sshd@22-172.24.4.153:22-172.24.4.1:35654.service - OpenSSH per-connection server daemon (172.24.4.1:35654).
Jan 13 20:46:03.049705 systemd-logind[1450]: Removed session 24.
Jan 13 20:46:04.585244 sshd[4423]: Accepted publickey for core from 172.24.4.1 port 35654 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:46:04.587583 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:04.599148 systemd-logind[1450]: New session 25 of user core.
Jan 13 20:46:04.601584 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 20:46:05.879402 kubelet[2690]: I0113 20:46:05.878712 2690 topology_manager.go:215] "Topology Admit Handler" podUID="a6f2c9b7-4106-4205-a736-2ccd130d3926" podNamespace="kube-system" podName="cilium-mb9g4"
Jan 13 20:46:05.881767 kubelet[2690]: E0113 20:46:05.880647 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="374419b6-8645-485c-9a51-3b66501bb499" containerName="mount-bpf-fs"
Jan 13 20:46:05.881767 kubelet[2690]: E0113 20:46:05.880833 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="374419b6-8645-485c-9a51-3b66501bb499" containerName="cilium-agent"
Jan 13 20:46:05.881767 kubelet[2690]: E0113 20:46:05.880859 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="374419b6-8645-485c-9a51-3b66501bb499" containerName="mount-cgroup"
Jan 13 20:46:05.881767 kubelet[2690]: E0113 20:46:05.880874 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="374419b6-8645-485c-9a51-3b66501bb499" containerName="apply-sysctl-overwrites"
Jan 13 20:46:05.881767 kubelet[2690]: E0113 20:46:05.881066 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="374419b6-8645-485c-9a51-3b66501bb499" containerName="clean-cilium-state"
Jan 13 20:46:05.881767 kubelet[2690]: E0113 20:46:05.881099 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f" containerName="cilium-operator"
Jan 13 20:46:05.881767 kubelet[2690]: I0113 20:46:05.881404 2690 memory_manager.go:354] "RemoveStaleState removing state" podUID="374419b6-8645-485c-9a51-3b66501bb499" containerName="cilium-agent"
Jan 13 20:46:05.881767 kubelet[2690]: I0113 20:46:05.881481 2690 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b0d85ce-5a3d-48e0-9ae8-d4c88a98997f" containerName="cilium-operator"
Jan 13 20:46:05.901814 systemd[1]: Created slice kubepods-burstable-poda6f2c9b7_4106_4205_a736_2ccd130d3926.slice - libcontainer container kubepods-burstable-poda6f2c9b7_4106_4205_a736_2ccd130d3926.slice.
Jan 13 20:46:05.969468 kubelet[2690]: I0113 20:46:05.969436 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6f2c9b7-4106-4205-a736-2ccd130d3926-cni-path\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.969865 kubelet[2690]: I0113 20:46:05.969637 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6f2c9b7-4106-4205-a736-2ccd130d3926-clustermesh-secrets\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.969865 kubelet[2690]: I0113 20:46:05.969667 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6f2c9b7-4106-4205-a736-2ccd130d3926-hostproc\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.969865 kubelet[2690]: I0113 20:46:05.969684 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6f2c9b7-4106-4205-a736-2ccd130d3926-cilium-cgroup\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.969865 kubelet[2690]: I0113 20:46:05.969701 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6f2c9b7-4106-4205-a736-2ccd130d3926-lib-modules\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.969865 kubelet[2690]: I0113 20:46:05.969730 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6f2c9b7-4106-4205-a736-2ccd130d3926-xtables-lock\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.969865 kubelet[2690]: I0113 20:46:05.969753 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6f2c9b7-4106-4205-a736-2ccd130d3926-cilium-run\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.970057 kubelet[2690]: I0113 20:46:05.969777 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6f2c9b7-4106-4205-a736-2ccd130d3926-cilium-config-path\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.970057 kubelet[2690]: I0113 20:46:05.969796 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6f2c9b7-4106-4205-a736-2ccd130d3926-host-proc-sys-net\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.970057 kubelet[2690]: I0113 20:46:05.969836 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a6f2c9b7-4106-4205-a736-2ccd130d3926-cilium-ipsec-secrets\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.970351 kubelet[2690]: I0113 20:46:05.970314 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6f2c9b7-4106-4205-a736-2ccd130d3926-bpf-maps\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.970538 kubelet[2690]: I0113 20:46:05.970362 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6f2c9b7-4106-4205-a736-2ccd130d3926-etc-cni-netd\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.970538 kubelet[2690]: I0113 20:46:05.970383 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6f2c9b7-4106-4205-a736-2ccd130d3926-host-proc-sys-kernel\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.970538 kubelet[2690]: I0113 20:46:05.970400 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6f2c9b7-4106-4205-a736-2ccd130d3926-hubble-tls\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:05.970538 kubelet[2690]: I0113 20:46:05.970417 2690 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx6kp\" (UniqueName: \"kubernetes.io/projected/a6f2c9b7-4106-4205-a736-2ccd130d3926-kube-api-access-lx6kp\") pod \"cilium-mb9g4\" (UID: \"a6f2c9b7-4106-4205-a736-2ccd130d3926\") " pod="kube-system/cilium-mb9g4"
Jan 13 20:46:06.010127 sshd[4425]: Connection closed by 172.24.4.1 port 35654
Jan 13 20:46:06.012367 sshd-session[4423]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:06.020490 systemd[1]: sshd@22-172.24.4.153:22-172.24.4.1:35654.service: Deactivated successfully.
Jan 13 20:46:06.023201 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 20:46:06.024495 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit.
Jan 13 20:46:06.031663 systemd[1]: Started sshd@23-172.24.4.153:22-172.24.4.1:50842.service - OpenSSH per-connection server daemon (172.24.4.1:50842).
Jan 13 20:46:06.033952 systemd-logind[1450]: Removed session 25.
Jan 13 20:46:06.213438 containerd[1466]: time="2025-01-13T20:46:06.212539857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mb9g4,Uid:a6f2c9b7-4106-4205-a736-2ccd130d3926,Namespace:kube-system,Attempt:0,}"
Jan 13 20:46:06.325889 containerd[1466]: time="2025-01-13T20:46:06.324390433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:46:06.325889 containerd[1466]: time="2025-01-13T20:46:06.325474058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:46:06.325889 containerd[1466]: time="2025-01-13T20:46:06.325538900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:06.325889 containerd[1466]: time="2025-01-13T20:46:06.325726264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:46:06.373693 systemd[1]: Started cri-containerd-5d7b142ee7144dc75b9f5c48518f4eeca46acc389454098753d2bb1c0837720f.scope - libcontainer container 5d7b142ee7144dc75b9f5c48518f4eeca46acc389454098753d2bb1c0837720f.
Jan 13 20:46:06.405092 containerd[1466]: time="2025-01-13T20:46:06.404891367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mb9g4,Uid:a6f2c9b7-4106-4205-a736-2ccd130d3926,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d7b142ee7144dc75b9f5c48518f4eeca46acc389454098753d2bb1c0837720f\""
Jan 13 20:46:06.408560 containerd[1466]: time="2025-01-13T20:46:06.408533610Z" level=info msg="CreateContainer within sandbox \"5d7b142ee7144dc75b9f5c48518f4eeca46acc389454098753d2bb1c0837720f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:46:06.423234 containerd[1466]: time="2025-01-13T20:46:06.423168133Z" level=info msg="CreateContainer within sandbox \"5d7b142ee7144dc75b9f5c48518f4eeca46acc389454098753d2bb1c0837720f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b8c5875402348dd9df4f5b2e61e3681011745aa895a464a17f134ecb446a3051\""
Jan 13 20:46:06.423763 containerd[1466]: time="2025-01-13T20:46:06.423727419Z" level=info msg="StartContainer for \"b8c5875402348dd9df4f5b2e61e3681011745aa895a464a17f134ecb446a3051\""
Jan 13 20:46:06.452497 systemd[1]: Started cri-containerd-b8c5875402348dd9df4f5b2e61e3681011745aa895a464a17f134ecb446a3051.scope - libcontainer container b8c5875402348dd9df4f5b2e61e3681011745aa895a464a17f134ecb446a3051.
Jan 13 20:46:06.487900 containerd[1466]: time="2025-01-13T20:46:06.485729667Z" level=info msg="StartContainer for \"b8c5875402348dd9df4f5b2e61e3681011745aa895a464a17f134ecb446a3051\" returns successfully"
Jan 13 20:46:06.493791 systemd[1]: cri-containerd-b8c5875402348dd9df4f5b2e61e3681011745aa895a464a17f134ecb446a3051.scope: Deactivated successfully.
Jan 13 20:46:06.532080 containerd[1466]: time="2025-01-13T20:46:06.531984585Z" level=info msg="shim disconnected" id=b8c5875402348dd9df4f5b2e61e3681011745aa895a464a17f134ecb446a3051 namespace=k8s.io
Jan 13 20:46:06.532080 containerd[1466]: time="2025-01-13T20:46:06.532072390Z" level=warning msg="cleaning up after shim disconnected" id=b8c5875402348dd9df4f5b2e61e3681011745aa895a464a17f134ecb446a3051 namespace=k8s.io
Jan 13 20:46:06.532080 containerd[1466]: time="2025-01-13T20:46:06.532083762Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:46:06.544584 containerd[1466]: time="2025-01-13T20:46:06.544532040Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:46:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:46:06.821191 kubelet[2690]: E0113 20:46:06.821056 2690 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:46:07.307087 sshd[4435]: Accepted publickey for core from 172.24.4.1 port 50842 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:46:07.309976 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:07.319602 systemd-logind[1450]: New session 26 of user core.
Jan 13 20:46:07.330656 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 20:46:07.361513 containerd[1466]: time="2025-01-13T20:46:07.361406638Z" level=info msg="CreateContainer within sandbox \"5d7b142ee7144dc75b9f5c48518f4eeca46acc389454098753d2bb1c0837720f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:46:07.415485 containerd[1466]: time="2025-01-13T20:46:07.415444671Z" level=info msg="CreateContainer within sandbox \"5d7b142ee7144dc75b9f5c48518f4eeca46acc389454098753d2bb1c0837720f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8fcbbe4942f46faca10158444ee11f4d4d521308ca7161d91921afcf998298a5\""
Jan 13 20:46:07.417219 containerd[1466]: time="2025-01-13T20:46:07.416152757Z" level=info msg="StartContainer for \"8fcbbe4942f46faca10158444ee11f4d4d521308ca7161d91921afcf998298a5\""
Jan 13 20:46:07.451477 systemd[1]: Started cri-containerd-8fcbbe4942f46faca10158444ee11f4d4d521308ca7161d91921afcf998298a5.scope - libcontainer container 8fcbbe4942f46faca10158444ee11f4d4d521308ca7161d91921afcf998298a5.
Jan 13 20:46:07.482604 containerd[1466]: time="2025-01-13T20:46:07.482543349Z" level=info msg="StartContainer for \"8fcbbe4942f46faca10158444ee11f4d4d521308ca7161d91921afcf998298a5\" returns successfully"
Jan 13 20:46:07.485551 systemd[1]: cri-containerd-8fcbbe4942f46faca10158444ee11f4d4d521308ca7161d91921afcf998298a5.scope: Deactivated successfully.
Jan 13 20:46:07.511718 containerd[1466]: time="2025-01-13T20:46:07.511610795Z" level=info msg="shim disconnected" id=8fcbbe4942f46faca10158444ee11f4d4d521308ca7161d91921afcf998298a5 namespace=k8s.io
Jan 13 20:46:07.511718 containerd[1466]: time="2025-01-13T20:46:07.511687670Z" level=warning msg="cleaning up after shim disconnected" id=8fcbbe4942f46faca10158444ee11f4d4d521308ca7161d91921afcf998298a5 namespace=k8s.io
Jan 13 20:46:07.512129 containerd[1466]: time="2025-01-13T20:46:07.511697900Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:46:08.030422 sshd[4545]: Connection closed by 172.24.4.1 port 50842
Jan 13 20:46:08.031544 sshd-session[4435]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:08.045812 systemd[1]: sshd@23-172.24.4.153:22-172.24.4.1:50842.service: Deactivated successfully.
Jan 13 20:46:08.051453 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 20:46:08.054417 systemd-logind[1450]: Session 26 logged out. Waiting for processes to exit.
Jan 13 20:46:08.070985 systemd[1]: Started sshd@24-172.24.4.153:22-172.24.4.1:50846.service - OpenSSH per-connection server daemon (172.24.4.1:50846).
Jan 13 20:46:08.074771 systemd-logind[1450]: Removed session 26.
Jan 13 20:46:08.088312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fcbbe4942f46faca10158444ee11f4d4d521308ca7161d91921afcf998298a5-rootfs.mount: Deactivated successfully.
Jan 13 20:46:08.362801 containerd[1466]: time="2025-01-13T20:46:08.361222304Z" level=info msg="CreateContainer within sandbox \"5d7b142ee7144dc75b9f5c48518f4eeca46acc389454098753d2bb1c0837720f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:46:08.644787 containerd[1466]: time="2025-01-13T20:46:08.641192372Z" level=info msg="CreateContainer within sandbox \"5d7b142ee7144dc75b9f5c48518f4eeca46acc389454098753d2bb1c0837720f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd407832309a08a238273232dfbd46607864b07f92f5d06e4197d1007b2e6882\""
Jan 13 20:46:08.651426 containerd[1466]: time="2025-01-13T20:46:08.649476843Z" level=info msg="StartContainer for \"fd407832309a08a238273232dfbd46607864b07f92f5d06e4197d1007b2e6882\""
Jan 13 20:46:08.726507 systemd[1]: Started cri-containerd-fd407832309a08a238273232dfbd46607864b07f92f5d06e4197d1007b2e6882.scope - libcontainer container fd407832309a08a238273232dfbd46607864b07f92f5d06e4197d1007b2e6882.
Jan 13 20:46:08.783668 containerd[1466]: time="2025-01-13T20:46:08.783609761Z" level=info msg="StartContainer for \"fd407832309a08a238273232dfbd46607864b07f92f5d06e4197d1007b2e6882\" returns successfully"
Jan 13 20:46:08.788087 systemd[1]: cri-containerd-fd407832309a08a238273232dfbd46607864b07f92f5d06e4197d1007b2e6882.scope: Deactivated successfully.
Jan 13 20:46:08.818022 containerd[1466]: time="2025-01-13T20:46:08.817795425Z" level=info msg="shim disconnected" id=fd407832309a08a238273232dfbd46607864b07f92f5d06e4197d1007b2e6882 namespace=k8s.io
Jan 13 20:46:08.818022 containerd[1466]: time="2025-01-13T20:46:08.817857783Z" level=warning msg="cleaning up after shim disconnected" id=fd407832309a08a238273232dfbd46607864b07f92f5d06e4197d1007b2e6882 namespace=k8s.io
Jan 13 20:46:08.818022 containerd[1466]: time="2025-01-13T20:46:08.817868353Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:46:09.086927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd407832309a08a238273232dfbd46607864b07f92f5d06e4197d1007b2e6882-rootfs.mount: Deactivated successfully.
Jan 13 20:46:09.375690 containerd[1466]: time="2025-01-13T20:46:09.372978583Z" level=info msg="CreateContainer within sandbox \"5d7b142ee7144dc75b9f5c48518f4eeca46acc389454098753d2bb1c0837720f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:46:09.408200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4027850319.mount: Deactivated successfully.
Jan 13 20:46:09.408802 containerd[1466]: time="2025-01-13T20:46:09.408717661Z" level=info msg="CreateContainer within sandbox \"5d7b142ee7144dc75b9f5c48518f4eeca46acc389454098753d2bb1c0837720f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5ad3ad21064fcf605b2510cdd8e6c87e57efe08f274fa34ad338fad354203ec6\""
Jan 13 20:46:09.412620 containerd[1466]: time="2025-01-13T20:46:09.411512344Z" level=info msg="StartContainer for \"5ad3ad21064fcf605b2510cdd8e6c87e57efe08f274fa34ad338fad354203ec6\""
Jan 13 20:46:09.469490 systemd[1]: Started cri-containerd-5ad3ad21064fcf605b2510cdd8e6c87e57efe08f274fa34ad338fad354203ec6.scope - libcontainer container 5ad3ad21064fcf605b2510cdd8e6c87e57efe08f274fa34ad338fad354203ec6.
Jan 13 20:46:09.474197 sshd[4611]: Accepted publickey for core from 172.24.4.1 port 50846 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:46:09.476175 sshd-session[4611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:09.481851 systemd-logind[1450]: New session 27 of user core.
Jan 13 20:46:09.485478 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 13 20:46:09.505726 systemd[1]: cri-containerd-5ad3ad21064fcf605b2510cdd8e6c87e57efe08f274fa34ad338fad354203ec6.scope: Deactivated successfully.
Jan 13 20:46:09.515976 containerd[1466]: time="2025-01-13T20:46:09.515888435Z" level=info msg="StartContainer for \"5ad3ad21064fcf605b2510cdd8e6c87e57efe08f274fa34ad338fad354203ec6\" returns successfully"
Jan 13 20:46:09.548475 containerd[1466]: time="2025-01-13T20:46:09.548399270Z" level=info msg="shim disconnected" id=5ad3ad21064fcf605b2510cdd8e6c87e57efe08f274fa34ad338fad354203ec6 namespace=k8s.io
Jan 13 20:46:09.548475 containerd[1466]: time="2025-01-13T20:46:09.548453883Z" level=warning msg="cleaning up after shim disconnected" id=5ad3ad21064fcf605b2510cdd8e6c87e57efe08f274fa34ad338fad354203ec6 namespace=k8s.io
Jan 13 20:46:09.548475 containerd[1466]: time="2025-01-13T20:46:09.548463762Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:46:10.014191 kubelet[2690]: I0113 20:46:10.012806 2690 setters.go:580] "Node became not ready" node="ci-4186-1-0-b-778e6b4119.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:46:10Z","lastTransitionTime":"2025-01-13T20:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:46:10.084622 systemd[1]: run-containerd-runc-k8s.io-5ad3ad21064fcf605b2510cdd8e6c87e57efe08f274fa34ad338fad354203ec6-runc.mUDxQY.mount: Deactivated successfully.
Jan 13 20:46:10.084727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ad3ad21064fcf605b2510cdd8e6c87e57efe08f274fa34ad338fad354203ec6-rootfs.mount: Deactivated successfully.
Jan 13 20:46:10.381873 containerd[1466]: time="2025-01-13T20:46:10.381301649Z" level=info msg="CreateContainer within sandbox \"5d7b142ee7144dc75b9f5c48518f4eeca46acc389454098753d2bb1c0837720f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:46:10.502104 containerd[1466]: time="2025-01-13T20:46:10.502016021Z" level=info msg="CreateContainer within sandbox \"5d7b142ee7144dc75b9f5c48518f4eeca46acc389454098753d2bb1c0837720f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9482b9156db7a82678bf091d5042054c9ed13bf650583a29f4ed7b955da8ddea\""
Jan 13 20:46:10.503919 containerd[1466]: time="2025-01-13T20:46:10.503003013Z" level=info msg="StartContainer for \"9482b9156db7a82678bf091d5042054c9ed13bf650583a29f4ed7b955da8ddea\""
Jan 13 20:46:10.568475 systemd[1]: Started cri-containerd-9482b9156db7a82678bf091d5042054c9ed13bf650583a29f4ed7b955da8ddea.scope - libcontainer container 9482b9156db7a82678bf091d5042054c9ed13bf650583a29f4ed7b955da8ddea.
Jan 13 20:46:10.604421 containerd[1466]: time="2025-01-13T20:46:10.604374497Z" level=info msg="StartContainer for \"9482b9156db7a82678bf091d5042054c9ed13bf650583a29f4ed7b955da8ddea\" returns successfully"
Jan 13 20:46:10.988429 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:46:11.038429 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jan 13 20:46:14.477707 systemd-networkd[1376]: lxc_health: Link UP
Jan 13 20:46:14.488511 systemd-networkd[1376]: lxc_health: Gained carrier
Jan 13 20:46:16.287933 kubelet[2690]: I0113 20:46:16.287829 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mb9g4" podStartSLOduration=11.28779402 podStartE2EDuration="11.28779402s" podCreationTimestamp="2025-01-13 20:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:11.421302992 +0000 UTC m=+164.924818795" watchObservedRunningTime="2025-01-13 20:46:16.28779402 +0000 UTC m=+169.791309864"
Jan 13 20:46:16.433870 systemd-networkd[1376]: lxc_health: Gained IPv6LL
Jan 13 20:46:16.729377 kubelet[2690]: E0113 20:46:16.728721 2690 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47636->127.0.0.1:42265: write tcp 127.0.0.1:47636->127.0.0.1:42265: write: broken pipe
Jan 13 20:46:18.824894 systemd[1]: run-containerd-runc-k8s.io-9482b9156db7a82678bf091d5042054c9ed13bf650583a29f4ed7b955da8ddea-runc.c80NXU.mount: Deactivated successfully.
Jan 13 20:46:21.074571 systemd[1]: run-containerd-runc-k8s.io-9482b9156db7a82678bf091d5042054c9ed13bf650583a29f4ed7b955da8ddea-runc.EoZpJJ.mount: Deactivated successfully.
Jan 13 20:46:21.383677 sshd[4693]: Connection closed by 172.24.4.1 port 50846
Jan 13 20:46:21.382559 sshd-session[4611]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:21.390519 systemd[1]: sshd@24-172.24.4.153:22-172.24.4.1:50846.service: Deactivated successfully.
Jan 13 20:46:21.396113 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 20:46:21.398524 systemd-logind[1450]: Session 27 logged out. Waiting for processes to exit.
Jan 13 20:46:21.400950 systemd-logind[1450]: Removed session 27.
Jan 13 20:46:26.705005 containerd[1466]: time="2025-01-13T20:46:26.704934101Z" level=info msg="StopPodSandbox for \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\""
Jan 13 20:46:26.706195 containerd[1466]: time="2025-01-13T20:46:26.705051109Z" level=info msg="TearDown network for sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" successfully"
Jan 13 20:46:26.706195 containerd[1466]: time="2025-01-13T20:46:26.705069483Z" level=info msg="StopPodSandbox for \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" returns successfully"
Jan 13 20:46:26.706515 containerd[1466]: time="2025-01-13T20:46:26.706199110Z" level=info msg="RemovePodSandbox for \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\""
Jan 13 20:46:26.706515 containerd[1466]: time="2025-01-13T20:46:26.706233374Z" level=info msg="Forcibly stopping sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\""
Jan 13 20:46:26.706515 containerd[1466]: time="2025-01-13T20:46:26.706316710Z" level=info msg="TearDown network for sandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" successfully"
Jan 13 20:46:26.711108 containerd[1466]: time="2025-01-13T20:46:26.710995878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:26.711108 containerd[1466]: time="2025-01-13T20:46:26.711077901Z" level=info msg="RemovePodSandbox \"c6e18e0985114c4f16e2d6c1f9328123c890f6741596246d5a0807c433461c53\" returns successfully"
Jan 13 20:46:26.711823 containerd[1466]: time="2025-01-13T20:46:26.711800149Z" level=info msg="StopPodSandbox for \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\""
Jan 13 20:46:26.712191 containerd[1466]: time="2025-01-13T20:46:26.712108304Z" level=info msg="TearDown network for sandbox \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\" successfully"
Jan 13 20:46:26.712191 containerd[1466]: time="2025-01-13T20:46:26.712169698Z" level=info msg="StopPodSandbox for \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\" returns successfully"
Jan 13 20:46:26.713268 containerd[1466]: time="2025-01-13T20:46:26.713084044Z" level=info msg="RemovePodSandbox for \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\""
Jan 13 20:46:26.713268 containerd[1466]: time="2025-01-13T20:46:26.713261014Z" level=info msg="Forcibly stopping sandbox \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\""
Jan 13 20:46:26.713489 containerd[1466]: time="2025-01-13T20:46:26.713362493Z" level=info msg="TearDown network for sandbox \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\" successfully"
Jan 13 20:46:26.717522 containerd[1466]: time="2025-01-13T20:46:26.717430532Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:26.717522 containerd[1466]: time="2025-01-13T20:46:26.717506635Z" level=info msg="RemovePodSandbox \"e1dab326d7762138aa74008cdf28da9dd73d19f3afb3e1f2752a80efb36a329c\" returns successfully"