Jan 29 16:17:42.100795 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:17:42.100826 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:17:42.100838 kernel: BIOS-provided physical RAM map:
Jan 29 16:17:42.100847 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 16:17:42.100855 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 16:17:42.100867 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 16:17:42.100877 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 29 16:17:42.100886 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 29 16:17:42.100894 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 16:17:42.100902 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 16:17:42.100911 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 29 16:17:42.100919 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 16:17:42.100928 kernel: NX (Execute Disable) protection: active
Jan 29 16:17:42.100937 kernel: APIC: Static calls initialized
Jan 29 16:17:42.100949 kernel: SMBIOS 3.0.0 present.
Jan 29 16:17:42.100959 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 29 16:17:42.100968 kernel: Hypervisor detected: KVM
Jan 29 16:17:42.100977 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:17:42.100985 kernel: kvm-clock: using sched offset of 4598773673 cycles
Jan 29 16:17:42.100997 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:17:42.101006 kernel: tsc: Detected 1996.249 MHz processor
Jan 29 16:17:42.101016 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:17:42.101026 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:17:42.101035 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 29 16:17:42.101045 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 16:17:42.101054 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:17:42.101064 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 29 16:17:42.101073 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:17:42.101084 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 29 16:17:42.101094 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:17:42.101103 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:17:42.101112 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:17:42.101122 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 29 16:17:42.101131 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:17:42.101140 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:17:42.101149 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 29 16:17:42.101159 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 29 16:17:42.101170 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 29 16:17:42.101179 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 29 16:17:42.101189 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 29 16:17:42.101202 kernel: No NUMA configuration found
Jan 29 16:17:42.101212 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 29 16:17:42.101221 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff]
Jan 29 16:17:42.101233 kernel: Zone ranges:
Jan 29 16:17:42.101243 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:17:42.101253 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 29 16:17:42.101262 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 29 16:17:42.101272 kernel: Movable zone start for each node
Jan 29 16:17:42.101282 kernel: Early memory node ranges
Jan 29 16:17:42.101291 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 16:17:42.101301 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 29 16:17:42.101312 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 29 16:17:42.101349 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 29 16:17:42.101360 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:17:42.101369 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 16:17:42.101379 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 29 16:17:42.101389 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 16:17:42.101398 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:17:42.101408 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:17:42.101418 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 16:17:42.101430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:17:42.101440 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:17:42.101449 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:17:42.101459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:17:42.101469 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:17:42.101478 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 16:17:42.101488 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 16:17:42.101498 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 29 16:17:42.101507 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:17:42.101519 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:17:42.101529 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 16:17:42.101538 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 16:17:42.101548 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 16:17:42.101557 kernel: pcpu-alloc: [0] 0 1
Jan 29 16:17:42.101567 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 16:17:42.101578 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:17:42.101588 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:17:42.101600 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:17:42.101610 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:17:42.101620 kernel: Fallback order for Node 0: 0
Jan 29 16:17:42.101630 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 29 16:17:42.101639 kernel: Policy zone: Normal
Jan 29 16:17:42.101649 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:17:42.101658 kernel: software IO TLB: area num 2.
Jan 29 16:17:42.101668 kernel: Memory: 3964168K/4193772K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 229344K reserved, 0K cma-reserved)
Jan 29 16:17:42.101678 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:17:42.101691 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:17:42.101700 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:17:42.101710 kernel: Dynamic Preempt: voluntary
Jan 29 16:17:42.101720 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:17:42.101730 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:17:42.101740 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:17:42.101750 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:17:42.101760 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:17:42.101770 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:17:42.101779 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:17:42.101791 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:17:42.101801 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 16:17:42.101811 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:17:42.101820 kernel: Console: colour VGA+ 80x25
Jan 29 16:17:42.101830 kernel: printk: console [tty0] enabled
Jan 29 16:17:42.101840 kernel: printk: console [ttyS0] enabled
Jan 29 16:17:42.101849 kernel: ACPI: Core revision 20230628
Jan 29 16:17:42.101859 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:17:42.101869 kernel: x2apic enabled
Jan 29 16:17:42.101881 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:17:42.101891 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 16:17:42.101900 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 16:17:42.101910 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 29 16:17:42.101921 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 16:17:42.101930 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 16:17:42.101940 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:17:42.101950 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:17:42.101960 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:17:42.101973 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:17:42.101983 kernel: Speculative Store Bypass: Vulnerable
Jan 29 16:17:42.101992 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 29 16:17:42.102002 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:17:42.102021 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:17:42.102033 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:17:42.102043 kernel: landlock: Up and running.
Jan 29 16:17:42.102053 kernel: SELinux: Initializing.
Jan 29 16:17:42.102064 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:17:42.102074 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:17:42.102084 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 29 16:17:42.102097 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:17:42.102108 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:17:42.102118 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:17:42.102128 kernel: Performance Events: AMD PMU driver.
Jan 29 16:17:42.102138 kernel: ... version: 0
Jan 29 16:17:42.102151 kernel: ... bit width: 48
Jan 29 16:17:42.102161 kernel: ... generic registers: 4
Jan 29 16:17:42.102171 kernel: ... value mask: 0000ffffffffffff
Jan 29 16:17:42.102181 kernel: ... max period: 00007fffffffffff
Jan 29 16:17:42.102191 kernel: ... fixed-purpose events: 0
Jan 29 16:17:42.102201 kernel: ... event mask: 000000000000000f
Jan 29 16:17:42.102212 kernel: signal: max sigframe size: 1440
Jan 29 16:17:42.102222 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:17:42.102232 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:17:42.102245 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:17:42.102255 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:17:42.102265 kernel: .... node #0, CPUs: #1
Jan 29 16:17:42.102275 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:17:42.102285 kernel: smpboot: Max logical packages: 2
Jan 29 16:17:42.102295 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 29 16:17:42.102305 kernel: devtmpfs: initialized
Jan 29 16:17:42.102315 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:17:42.102342 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:17:42.102353 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:17:42.102366 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:17:42.102376 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:17:42.102386 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:17:42.102397 kernel: audit: type=2000 audit(1738167460.769:1): state=initialized audit_enabled=0 res=1
Jan 29 16:17:42.102407 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:17:42.102417 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:17:42.102427 kernel: cpuidle: using governor menu
Jan 29 16:17:42.102437 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:17:42.102447 kernel: dca service started, version 1.12.1
Jan 29 16:17:42.102461 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:17:42.102471 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:17:42.102481 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:17:42.102491 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:17:42.102502 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:17:42.102512 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:17:42.102522 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:17:42.102532 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:17:42.102555 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:17:42.102567 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:17:42.102576 kernel: ACPI: Interpreter enabled
Jan 29 16:17:42.102585 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 16:17:42.102594 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:17:42.102603 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:17:42.102612 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 16:17:42.102621 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 29 16:17:42.102631 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:17:42.102777 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:17:42.102879 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 29 16:17:42.102971 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 29 16:17:42.102985 kernel: acpiphp: Slot [3] registered
Jan 29 16:17:42.102994 kernel: acpiphp: Slot [4] registered
Jan 29 16:17:42.103003 kernel: acpiphp: Slot [5] registered
Jan 29 16:17:42.103012 kernel: acpiphp: Slot [6] registered
Jan 29 16:17:42.103021 kernel: acpiphp: Slot [7] registered
Jan 29 16:17:42.103034 kernel: acpiphp: Slot [8] registered
Jan 29 16:17:42.103043 kernel: acpiphp: Slot [9] registered
Jan 29 16:17:42.103052 kernel: acpiphp: Slot [10] registered
Jan 29 16:17:42.103061 kernel: acpiphp: Slot [11] registered
Jan 29 16:17:42.103070 kernel: acpiphp: Slot [12] registered
Jan 29 16:17:42.103079 kernel: acpiphp: Slot [13] registered
Jan 29 16:17:42.103088 kernel: acpiphp: Slot [14] registered
Jan 29 16:17:42.103097 kernel: acpiphp: Slot [15] registered
Jan 29 16:17:42.103106 kernel: acpiphp: Slot [16] registered
Jan 29 16:17:42.103116 kernel: acpiphp: Slot [17] registered
Jan 29 16:17:42.103125 kernel: acpiphp: Slot [18] registered
Jan 29 16:17:42.103134 kernel: acpiphp: Slot [19] registered
Jan 29 16:17:42.103143 kernel: acpiphp: Slot [20] registered
Jan 29 16:17:42.103152 kernel: acpiphp: Slot [21] registered
Jan 29 16:17:42.103161 kernel: acpiphp: Slot [22] registered
Jan 29 16:17:42.103169 kernel: acpiphp: Slot [23] registered
Jan 29 16:17:42.103178 kernel: acpiphp: Slot [24] registered
Jan 29 16:17:42.103187 kernel: acpiphp: Slot [25] registered
Jan 29 16:17:42.103196 kernel: acpiphp: Slot [26] registered
Jan 29 16:17:42.103207 kernel: acpiphp: Slot [27] registered
Jan 29 16:17:42.103216 kernel: acpiphp: Slot [28] registered
Jan 29 16:17:42.103225 kernel: acpiphp: Slot [29] registered
Jan 29 16:17:42.103234 kernel: acpiphp: Slot [30] registered
Jan 29 16:17:42.103243 kernel: acpiphp: Slot [31] registered
Jan 29 16:17:42.103252 kernel: PCI host bridge to bus 0000:00
Jan 29 16:17:42.103368 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 16:17:42.103458 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 16:17:42.103548 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:17:42.103633 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 16:17:42.103716 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 29 16:17:42.103799 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:17:42.103917 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 29 16:17:42.104024 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 29 16:17:42.104134 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 29 16:17:42.104235 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 29 16:17:42.104348 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 29 16:17:42.104446 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 29 16:17:42.104540 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 29 16:17:42.104634 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 29 16:17:42.104739 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 29 16:17:42.104840 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 29 16:17:42.104934 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 29 16:17:42.107337 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 29 16:17:42.107462 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 29 16:17:42.107568 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 29 16:17:42.107675 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 29 16:17:42.107776 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 29 16:17:42.107887 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 16:17:42.108000 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 16:17:42.108106 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 29 16:17:42.108208 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 29 16:17:42.108311 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 29 16:17:42.108463 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 29 16:17:42.108579 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 29 16:17:42.108714 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 16:17:42.108822 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 29 16:17:42.108924 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 29 16:17:42.109037 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 29 16:17:42.109142 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 29 16:17:42.109247 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 29 16:17:42.109446 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 16:17:42.109559 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 29 16:17:42.109659 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 29 16:17:42.109759 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 29 16:17:42.109775 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:17:42.109786 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:17:42.109796 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:17:42.109807 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:17:42.109817 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 16:17:42.109831 kernel: iommu: Default domain type: Translated
Jan 29 16:17:42.109842 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:17:42.109852 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:17:42.109862 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:17:42.109872 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 16:17:42.109883 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 29 16:17:42.109983 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 29 16:17:42.110083 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 29 16:17:42.110188 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 16:17:42.110203 kernel: vgaarb: loaded
Jan 29 16:17:42.110214 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:17:42.110224 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:17:42.110234 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:17:42.110245 kernel: pnp: PnP ACPI init
Jan 29 16:17:42.112383 kernel: pnp 00:03: [dma 2]
Jan 29 16:17:42.112403 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 16:17:42.112414 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:17:42.112429 kernel: NET: Registered PF_INET protocol family
Jan 29 16:17:42.112439 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:17:42.112450 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 16:17:42.112461 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:17:42.112471 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:17:42.112482 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 16:17:42.112492 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 16:17:42.112502 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:17:42.112513 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:17:42.112526 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:17:42.112536 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:17:42.112631 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 16:17:42.112724 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 16:17:42.112839 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:17:42.112935 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 29 16:17:42.113025 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 29 16:17:42.113127 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 29 16:17:42.113236 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 16:17:42.113253 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:17:42.113263 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 29 16:17:42.113274 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 29 16:17:42.113284 kernel: Initialise system trusted keyrings
Jan 29 16:17:42.113295 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 16:17:42.113305 kernel: Key type asymmetric registered
Jan 29 16:17:42.113316 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:17:42.113368 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:17:42.113379 kernel: io scheduler mq-deadline registered
Jan 29 16:17:42.113389 kernel: io scheduler kyber registered
Jan 29 16:17:42.113399 kernel: io scheduler bfq registered
Jan 29 16:17:42.113410 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:17:42.113421 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 29 16:17:42.113431 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 29 16:17:42.113442 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 29 16:17:42.113452 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 29 16:17:42.113464 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:17:42.113475 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:17:42.113485 kernel: random: crng init done
Jan 29 16:17:42.113496 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 16:17:42.113506 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 16:17:42.113516 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 16:17:42.113622 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 16:17:42.113639 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 16:17:42.113728 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 16:17:42.113827 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T16:17:41 UTC (1738167461)
Jan 29 16:17:42.113941 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 29 16:17:42.113962 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 16:17:42.113973 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:17:42.113983 kernel: Segment Routing with IPv6
Jan 29 16:17:42.113994 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:17:42.114004 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:17:42.114014 kernel: Key type dns_resolver registered
Jan 29 16:17:42.114029 kernel: IPI shorthand broadcast: enabled
Jan 29 16:17:42.114039 kernel: sched_clock: Marking stable (1049007304, 171172848)->(1265292546, -45112394)
Jan 29 16:17:42.114050 kernel: registered taskstats version 1
Jan 29 16:17:42.114060 kernel: Loading compiled-in X.509 certificates
Jan 29 16:17:42.114070 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:17:42.114081 kernel: Key type .fscrypt registered
Jan 29 16:17:42.114091 kernel: Key type fscrypt-provisioning registered
Jan 29 16:17:42.114101 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:17:42.114111 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:17:42.114124 kernel: ima: No architecture policies found
Jan 29 16:17:42.114134 kernel: clk: Disabling unused clocks
Jan 29 16:17:42.114145 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:17:42.114155 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:17:42.114165 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:17:42.114175 kernel: Run /init as init process
Jan 29 16:17:42.114186 kernel: with arguments:
Jan 29 16:17:42.114196 kernel: /init
Jan 29 16:17:42.114206 kernel: with environment:
Jan 29 16:17:42.114218 kernel: HOME=/
Jan 29 16:17:42.114228 kernel: TERM=linux
Jan 29 16:17:42.114238 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:17:42.114250 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:17:42.114265 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:17:42.114277 systemd[1]: Detected virtualization kvm.
Jan 29 16:17:42.114288 systemd[1]: Detected architecture x86-64.
Jan 29 16:17:42.114301 systemd[1]: Running in initrd.
Jan 29 16:17:42.114312 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:17:42.116363 systemd[1]: Hostname set to <localhost>.
Jan 29 16:17:42.116379 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:17:42.116391 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:17:42.116403 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:17:42.116416 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:17:42.116443 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:17:42.116458 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:17:42.116471 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:17:42.116484 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:17:42.116498 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:17:42.116511 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:17:42.116525 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:17:42.116537 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:17:42.116549 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:17:42.116562 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:17:42.116573 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:17:42.116585 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:17:42.116598 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:17:42.116610 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:17:42.116624 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:17:42.116636 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:17:42.116649 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:17:42.116661 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:17:42.116674 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:17:42.116686 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:17:42.116698 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:17:42.116711 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:17:42.116723 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:17:42.116737 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:17:42.116749 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:17:42.116762 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:17:42.116774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:17:42.116790 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:17:42.116806 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:17:42.116828 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:17:42.116880 systemd-journald[183]: Collecting audit messages is disabled.
Jan 29 16:17:42.116919 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:17:42.116933 systemd-journald[183]: Journal started
Jan 29 16:17:42.116961 systemd-journald[183]: Runtime Journal (/run/log/journal/ee9a575c35104d6bb6629d8c4fed3d3b) is 8M, max 78.3M, 70.3M free.
Jan 29 16:17:42.079669 systemd-modules-load[184]: Inserted module 'overlay'
Jan 29 16:17:42.166501 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:17:42.166522 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:17:42.166552 kernel: Bridge firewalling registered
Jan 29 16:17:42.166566 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:17:42.126066 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 29 16:17:42.168248 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:17:42.170108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:17:42.188972 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:17:42.193657 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:17:42.200662 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:17:42.215521 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:17:42.221681 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:17:42.224490 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:17:42.226849 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:17:42.237520 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:17:42.240900 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:17:42.248508 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:17:42.258419 dracut-cmdline[216]: dracut-dracut-053
Jan 29 16:17:42.265085 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:17:42.292707 systemd-resolved[225]: Positive Trust Anchors:
Jan 29 16:17:42.292724 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:17:42.292770 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:17:42.300659 systemd-resolved[225]: Defaulting to hostname 'linux'.
Jan 29 16:17:42.301978 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:17:42.303654 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:17:42.346379 kernel: SCSI subsystem initialized
Jan 29 16:17:42.358388 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:17:42.371380 kernel: iscsi: registered transport (tcp)
Jan 29 16:17:42.395382 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:17:42.395470 kernel: QLogic iSCSI HBA Driver
Jan 29 16:17:42.440928 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:17:42.448511 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:17:42.484074 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:17:42.484189 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:17:42.484989 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:17:42.547458 kernel: raid6: sse2x4 gen() 4939 MB/s
Jan 29 16:17:42.565429 kernel: raid6: sse2x2 gen() 6426 MB/s
Jan 29 16:17:42.583722 kernel: raid6: sse2x1 gen() 10163 MB/s
Jan 29 16:17:42.583820 kernel: raid6: using algorithm sse2x1 gen() 10163 MB/s
Jan 29 16:17:42.602948 kernel: raid6: .... xor() 7141 MB/s, rmw enabled
Jan 29 16:17:42.603025 kernel: raid6: using ssse3x2 recovery algorithm
Jan 29 16:17:42.626720 kernel: xor: measuring software checksum speed
Jan 29 16:17:42.626864 kernel: prefetch64-sse : 18502 MB/sec
Jan 29 16:17:42.626922 kernel: generic_sse : 15526 MB/sec
Jan 29 16:17:42.628067 kernel: xor: using function: prefetch64-sse (18502 MB/sec)
Jan 29 16:17:42.809043 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:17:42.827240 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:17:42.833801 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:17:42.850238 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Jan 29 16:17:42.855194 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:17:42.870763 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:17:42.896279 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Jan 29 16:17:42.938644 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:17:42.948600 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:17:42.996841 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:17:43.008047 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:17:43.049682 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:17:43.058178 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:17:43.059481 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:17:43.061062 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:17:43.068838 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:17:43.083543 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:17:43.099371 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 29 16:17:43.157564 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 29 16:17:43.159351 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:17:43.159375 kernel: GPT:17805311 != 20971519
Jan 29 16:17:43.159388 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:17:43.159400 kernel: GPT:17805311 != 20971519
Jan 29 16:17:43.159411 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:17:43.159423 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:17:43.159435 kernel: libata version 3.00 loaded.
Jan 29 16:17:43.159450 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 29 16:17:43.166823 kernel: scsi host0: ata_piix
Jan 29 16:17:43.166974 kernel: scsi host1: ata_piix
Jan 29 16:17:43.167100 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 29 16:17:43.167115 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 29 16:17:43.127613 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:17:43.247670 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (451)
Jan 29 16:17:43.247699 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (473)
Jan 29 16:17:43.127779 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:17:43.128596 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:17:43.129083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:17:43.129212 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:17:43.132050 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:17:43.138653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:17:43.148488 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:17:43.230597 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 16:17:43.248268 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 16:17:43.249842 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:17:43.261749 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 16:17:43.273641 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 16:17:43.285212 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:17:43.297478 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:17:43.300467 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:17:43.317469 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:17:43.321496 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:17:43.321529 disk-uuid[506]: Primary Header is updated.
Jan 29 16:17:43.321529 disk-uuid[506]: Secondary Entries is updated.
Jan 29 16:17:43.321529 disk-uuid[506]: Secondary Header is updated.
Jan 29 16:17:44.346404 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:17:44.349118 disk-uuid[516]: The operation has completed successfully.
Jan 29 16:17:44.434846 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:17:44.434957 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:17:44.486472 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:17:44.495750 sh[527]: Success
Jan 29 16:17:44.510408 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 29 16:17:44.569198 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:17:44.570575 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:17:44.572977 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:17:44.599699 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3
Jan 29 16:17:44.599733 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:17:44.601624 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:17:44.603661 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:17:44.606133 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:17:44.620639 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:17:44.622877 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:17:44.629586 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:17:44.633622 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:17:44.659551 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:17:44.659678 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:17:44.661250 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:17:44.670407 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:17:44.680182 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:17:44.683698 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:17:44.697056 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:17:44.705127 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:17:44.809239 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:17:44.824440 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:17:44.853726 systemd-networkd[713]: lo: Link UP
Jan 29 16:17:44.854450 systemd-networkd[713]: lo: Gained carrier
Jan 29 16:17:44.856517 systemd-networkd[713]: Enumeration completed
Jan 29 16:17:44.857226 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:17:44.857565 systemd-networkd[713]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:17:44.857569 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:17:44.860664 systemd-networkd[713]: eth0: Link UP
Jan 29 16:17:44.860668 systemd-networkd[713]: eth0: Gained carrier
Jan 29 16:17:44.860677 systemd-networkd[713]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:17:44.861483 systemd[1]: Reached target network.target - Network.
Jan 29 16:17:44.869853 ignition[633]: Ignition 2.20.0
Jan 29 16:17:44.869864 ignition[633]: Stage: fetch-offline
Jan 29 16:17:44.871265 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:17:44.869897 ignition[633]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:17:44.871398 systemd-networkd[713]: eth0: DHCPv4 address 172.24.4.227/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 29 16:17:44.869907 ignition[633]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 16:17:44.870014 ignition[633]: parsed url from cmdline: ""
Jan 29 16:17:44.870019 ignition[633]: no config URL provided
Jan 29 16:17:44.870025 ignition[633]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:17:44.870034 ignition[633]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:17:44.870039 ignition[633]: failed to fetch config: resource requires networking
Jan 29 16:17:44.870234 ignition[633]: Ignition finished successfully
Jan 29 16:17:44.878868 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 16:17:44.895548 ignition[722]: Ignition 2.20.0
Jan 29 16:17:44.895565 ignition[722]: Stage: fetch
Jan 29 16:17:44.895803 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:17:44.895815 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 16:17:44.895907 ignition[722]: parsed url from cmdline: ""
Jan 29 16:17:44.895911 ignition[722]: no config URL provided
Jan 29 16:17:44.895917 ignition[722]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:17:44.895927 ignition[722]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:17:44.896020 ignition[722]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 29 16:17:44.896112 ignition[722]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 29 16:17:44.896138 ignition[722]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 29 16:17:45.090456 ignition[722]: GET result: OK
Jan 29 16:17:45.090681 ignition[722]: parsing config with SHA512: aa381a0612878f33c3debf0809d22de360b23e922d53d303f3bd61de2cb1401f2d2cfc1653a80a8d516ee058df1c87c0dffa0f850603ccbe2eb58d3b5e956427
Jan 29 16:17:45.106439 unknown[722]: fetched base config from "system"
Jan 29 16:17:45.106465 unknown[722]: fetched base config from "system"
Jan 29 16:17:45.109075 ignition[722]: fetch: fetch complete
Jan 29 16:17:45.106479 unknown[722]: fetched user config from "openstack"
Jan 29 16:17:45.109089 ignition[722]: fetch: fetch passed
Jan 29 16:17:45.112717 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 16:17:45.109184 ignition[722]: Ignition finished successfully
Jan 29 16:17:45.122716 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:17:45.167759 ignition[728]: Ignition 2.20.0
Jan 29 16:17:45.167788 ignition[728]: Stage: kargs
Jan 29 16:17:45.168183 ignition[728]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:17:45.168210 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 16:17:45.173723 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:17:45.170631 ignition[728]: kargs: kargs passed
Jan 29 16:17:45.170744 ignition[728]: Ignition finished successfully
Jan 29 16:17:45.184682 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:17:45.212305 ignition[735]: Ignition 2.20.0
Jan 29 16:17:45.212334 ignition[735]: Stage: disks
Jan 29 16:17:45.212542 ignition[735]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:17:45.215622 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:17:45.212555 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 16:17:45.213487 ignition[735]: disks: disks passed
Jan 29 16:17:45.217817 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:17:45.213536 ignition[735]: Ignition finished successfully
Jan 29 16:17:45.219646 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:17:45.221370 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:17:45.223228 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:17:45.224915 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:17:45.234617 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:17:45.256608 systemd-fsck[744]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 16:17:45.268187 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:17:45.601486 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:17:45.773348 kernel: EXT4-fs (vda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:17:45.775198 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:17:45.777309 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:17:45.784483 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:17:45.788591 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:17:45.790401 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 16:17:45.792844 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 29 16:17:45.796419 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:17:45.796462 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:17:45.804374 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (752)
Jan 29 16:17:45.808613 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:17:45.808713 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:17:45.808746 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:17:45.816662 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:17:45.823355 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:17:45.822417 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:17:45.826281 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:17:45.930696 initrd-setup-root[779]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:17:45.939285 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:17:45.945237 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:17:45.950799 initrd-setup-root[801]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:17:46.073568 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:17:46.087616 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:17:46.092596 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:17:46.109525 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:17:46.168470 ignition[869]: INFO : Ignition 2.20.0
Jan 29 16:17:46.168681 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:17:46.171496 ignition[869]: INFO : Stage: mount
Jan 29 16:17:46.171496 ignition[869]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:17:46.171496 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 16:17:46.171496 ignition[869]: INFO : mount: mount passed
Jan 29 16:17:46.171496 ignition[869]: INFO : Ignition finished successfully
Jan 29 16:17:46.173509 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:17:46.599582 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:17:46.660805 systemd-networkd[713]: eth0: Gained IPv6LL
Jan 29 16:17:52.993046 coreos-metadata[754]: Jan 29 16:17:52.992 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 16:17:53.035358 coreos-metadata[754]: Jan 29 16:17:53.035 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 16:17:53.053922 coreos-metadata[754]: Jan 29 16:17:53.053 INFO Fetch successful
Jan 29 16:17:53.055522 coreos-metadata[754]: Jan 29 16:17:53.054 INFO wrote hostname ci-4230-0-0-a-7095f58259.novalocal to /sysroot/etc/hostname
Jan 29 16:17:53.061505 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 29 16:17:53.061909 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 29 16:17:53.071485 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:17:53.087635 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:17:53.106418 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (885)
Jan 29 16:17:53.116405 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:17:53.122445 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:17:53.122580 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:17:53.133418 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:17:53.140045 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:17:53.184152 ignition[902]: INFO : Ignition 2.20.0 Jan 29 16:17:53.186157 ignition[902]: INFO : Stage: files Jan 29 16:17:53.186157 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:17:53.186157 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 16:17:53.191113 ignition[902]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:17:53.191113 ignition[902]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:17:53.191113 ignition[902]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:17:53.197948 ignition[902]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:17:53.198765 ignition[902]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:17:53.199789 unknown[902]: wrote ssh authorized keys file for user: core Jan 29 16:17:53.200496 ignition[902]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:17:53.204532 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:17:53.205580 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 16:17:53.274023 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 16:17:53.580270 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:17:53.580270 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:17:53.580270 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 16:17:54.132494 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 16:17:54.547603 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:17:54.547603 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:17:54.552879 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:17:54.552879 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:17:54.552879 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:17:54.552879 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:17:54.552879 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:17:54.552879 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:17:54.552879 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:17:54.552879 
ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:17:54.552879 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:17:54.552879 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:17:54.552879 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:17:54.552879 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:17:54.552879 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 16:17:55.114227 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 16:17:56.989682 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:17:56.989682 ignition[902]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 16:17:56.995844 ignition[902]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:17:56.995844 ignition[902]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:17:56.995844 ignition[902]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 16:17:56.995844 ignition[902]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 29 16:17:56.995844 ignition[902]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 16:17:56.995844 ignition[902]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:17:57.016120 ignition[902]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:17:57.016120 ignition[902]: INFO : files: files passed Jan 29 16:17:57.016120 ignition[902]: INFO : Ignition finished successfully Jan 29 16:17:56.999251 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:17:57.017745 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 16:17:57.029631 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:17:57.037508 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:17:57.037611 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
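The files-stage operations above (downloaded files, the symlink under /etc/extensions, the prepare-helm.service unit and its preset) map one-to-one onto sections of an Ignition config. A hedged reconstruction of what such a config could look like, built as JSON from Python; the paths, URLs, link target, and unit name come from the log, while the spec version and the elided unit contents are assumptions:

```python
# Sketch of an Ignition (spec 3.x) config that would produce the ops
# logged above. "3.4.0" and the unit body are assumptions; everything
# else is taken from the log lines.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"},
            },
        ],
        "links": [
            {
                # op(a) above: the symlink that activates the sysext on the real root
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            # op(c)/op(e) above: write the unit, then preset it to enabled
            {"name": "prepare-helm.service", "enabled": True, "contents": "..."}
        ]
    },
}

print(json.dumps(config, indent=2))
```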
Jan 29 16:17:57.049987 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:17:57.049987 initrd-setup-root-after-ignition[931]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:17:57.054663 initrd-setup-root-after-ignition[935]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:17:57.054800 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:17:57.055760 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:17:57.070509 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:17:57.111056 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:17:57.111162 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:17:57.113208 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:17:57.115266 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:17:57.117195 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:17:57.129450 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:17:57.145111 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:17:57.154648 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:17:57.168639 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:17:57.170088 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:17:57.170843 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:17:57.172811 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:17:57.172940 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:17:57.175026 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:17:57.176002 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:17:57.177884 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:17:57.179516 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:17:57.181167 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:17:57.186344 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:17:57.188359 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:17:57.190304 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:17:57.192253 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:17:57.194071 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:17:57.195645 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:17:57.196308 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:17:57.197813 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:17:57.198793 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:17:57.200623 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:17:57.201366 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 29 16:17:57.202393 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:17:57.202593 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:17:57.204815 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:17:57.205008 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:17:57.205844 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:17:57.205958 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:17:57.215710 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:17:57.217540 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:17:57.219841 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:17:57.221397 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:17:57.222167 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:17:57.222402 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:17:57.232255 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:17:57.233024 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:17:57.241099 ignition[955]: INFO : Ignition 2.20.0 Jan 29 16:17:57.241099 ignition[955]: INFO : Stage: umount Jan 29 16:17:57.243567 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:17:57.243567 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 16:17:57.243567 ignition[955]: INFO : umount: umount passed Jan 29 16:17:57.243567 ignition[955]: INFO : Ignition finished successfully Jan 29 16:17:57.243616 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:17:57.243718 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:17:57.246075 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:17:57.246146 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:17:57.246714 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:17:57.246758 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:17:57.249472 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 16:17:57.249517 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 16:17:57.250301 systemd[1]: Stopped target network.target - Network. Jan 29 16:17:57.251424 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:17:57.251475 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:17:57.252087 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:17:57.254443 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:17:57.258556 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:17:57.259384 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:17:57.259840 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:17:57.260368 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:17:57.260408 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:17:57.260947 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:17:57.260980 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 29 16:17:57.262104 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:17:57.262152 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:17:57.264632 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:17:57.264720 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:17:57.266387 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:17:57.267742 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:17:57.270552 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:17:57.271220 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:17:57.271312 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:17:57.274638 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:17:57.274732 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:17:57.280349 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:17:57.280630 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:17:57.280729 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:17:57.282600 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:17:57.284004 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:17:57.284247 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:17:57.288020 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:17:57.288076 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:17:57.293415 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:17:57.294755 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:17:57.295418 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:17:57.296045 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:17:57.296091 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:17:57.297456 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:17:57.297496 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:17:57.298204 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:17:57.298244 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:17:57.299831 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:17:57.302169 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:17:57.302228 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:17:57.308590 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:17:57.308726 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:17:57.310313 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:17:57.310451 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:17:57.311723 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:17:57.311770 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 29 16:17:57.312733 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:17:57.312766 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:17:57.313884 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:17:57.313929 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:17:57.315565 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:17:57.315612 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:17:57.316715 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:17:57.316757 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:17:57.327505 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:17:57.328643 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:17:57.328702 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:17:57.330682 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 16:17:57.330728 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:17:57.331785 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:17:57.331828 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:17:57.333538 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:17:57.333582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:17:57.335860 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 29 16:17:57.335916 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:17:57.336253 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:17:57.336350 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:17:57.337737 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:17:57.343709 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:17:57.351009 systemd[1]: Switching root. Jan 29 16:17:57.381714 systemd-journald[183]: Journal stopped Jan 29 16:17:59.128814 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 29 16:17:59.128863 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:17:59.128883 kernel: SELinux: policy capability open_perms=1 Jan 29 16:17:59.128896 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:17:59.128908 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:17:59.128924 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:17:59.128936 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:17:59.128948 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:17:59.128962 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:17:59.128974 kernel: audit: type=1403 audit(1738167478.021:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:17:59.128987 systemd[1]: Successfully loaded SELinux policy in 66.329ms. Jan 29 16:17:59.129007 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.577ms. 
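The journal gap above brackets the root switch: journald stops at 16:17:57.381714 and reports SIGTERM at 16:17:59.128814, so the pivot plus SELinux policy load cost just under two seconds of journal downtime. A small sketch for computing such deltas from the timestamp prefixes used throughout this log (the two sample values are copied from the lines above):

```python
# Measure phase durations from the "Mon DD HH:MM:SS.micro" prefixes
# this log uses. Note the format carries no year; fine for deltas
# within one boot.
from datetime import datetime

FMT = "%b %d %H:%M:%S.%f"

def delta_seconds(start: str, end: str) -> float:
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()

print(delta_seconds("Jan 29 16:17:57.381714", "Jan 29 16:17:59.128814"))
# -> 1.7471 (journal downtime across the root switch)
```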
Jan 29 16:17:59.129022 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:17:59.129240 systemd[1]: Detected virtualization kvm. Jan 29 16:17:59.129259 systemd[1]: Detected architecture x86-64. Jan 29 16:17:59.129273 systemd[1]: Detected first boot. Jan 29 16:17:59.129286 systemd[1]: Hostname set to <ci-4230-0-0-a-7095f58259.novalocal>. Jan 29 16:17:59.129300 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:17:59.129313 zram_generator::config[999]: No configuration found. Jan 29 16:17:59.129357 kernel: Guest personality initialized and is inactive Jan 29 16:17:59.129371 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:17:59.129387 kernel: Initialized host personality Jan 29 16:17:59.129398 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:17:59.129411 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:17:59.129425 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:17:59.129438 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:17:59.129451 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:17:59.129464 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:17:59.129477 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:17:59.129490 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:17:59.129506 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:17:59.129519 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:17:59.129634 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:17:59.129655 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:17:59.129669 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:17:59.129682 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:17:59.129695 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:17:59.129708 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:17:59.129726 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:17:59.129742 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:17:59.129758 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:17:59.129772 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:17:59.129785 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:17:59.129798 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:17:59.129811 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:17:59.129827 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:17:59.129840 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:17:59.129853 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:17:59.129866 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:17:59.129882 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:17:59.129896 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:17:59.129908 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:17:59.129921 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:17:59.129934 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:17:59.129949 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:17:59.129962 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:17:59.129975 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:17:59.130096 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:17:59.130115 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:17:59.130130 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:17:59.130143 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:17:59.130155 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:17:59.130168 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:17:59.130185 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:17:59.130198 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:17:59.130211 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:17:59.130225 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:17:59.130237 systemd[1]: Reached target machines.target - Containers. Jan 29 16:17:59.130250 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:17:59.130264 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:17:59.130277 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:17:59.130291 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:17:59.130305 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:17:59.130317 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:17:59.130356 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:17:59.132382 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:17:59.132403 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:17:59.132418 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:17:59.132431 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 29 16:17:59.132444 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:17:59.132463 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:17:59.132477 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:17:59.132491 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:17:59.132503 kernel: loop: module loaded Jan 29 16:17:59.132517 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:17:59.132530 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:17:59.132543 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:17:59.132556 kernel: fuse: init (API version 7.39) Jan 29 16:17:59.132571 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:17:59.132584 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:17:59.132596 kernel: ACPI: bus type drm_connector registered Jan 29 16:17:59.132609 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:17:59.132622 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:17:59.132639 systemd[1]: Stopped verity-setup.service. Jan 29 16:17:59.132654 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:17:59.132668 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:17:59.132681 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:17:59.132694 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:17:59.132738 systemd-journald[1096]: Collecting audit messages is disabled. Jan 29 16:17:59.132765 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:17:59.132779 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:17:59.132792 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:17:59.132808 systemd-journald[1096]: Journal started Jan 29 16:17:59.132836 systemd-journald[1096]: Runtime Journal (/run/log/journal/ee9a575c35104d6bb6629d8c4fed3d3b) is 8M, max 78.3M, 70.3M free. Jan 29 16:17:58.772625 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:17:58.780633 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 16:17:58.781062 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:17:59.137636 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:17:59.137649 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:17:59.138391 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:17:59.139606 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:17:59.139754 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:17:59.140568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:17:59.140704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:17:59.142779 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 29 16:17:59.142928 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:17:59.143635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:17:59.143771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:17:59.144547 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:17:59.144684 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:17:59.146004 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:17:59.146150 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:17:59.146922 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:17:59.147881 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:17:59.148625 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:17:59.149441 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:17:59.159104 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:17:59.166393 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:17:59.177410 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:17:59.177993 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:17:59.178026 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:17:59.180304 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:17:59.185076 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:17:59.192807 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:17:59.193715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:17:59.195532 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:17:59.200670 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:17:59.201378 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:17:59.204435 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:17:59.209054 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:17:59.211555 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:17:59.216540 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:17:59.219576 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:17:59.224302 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:17:59.225163 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:17:59.225896 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:17:59.226823 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 29 16:17:59.234161 systemd-journald[1096]: Time spent on flushing to /var/log/journal/ee9a575c35104d6bb6629d8c4fed3d3b is 50.794ms for 966 entries. Jan 29 16:17:59.234161 systemd-journald[1096]: System Journal (/var/log/journal/ee9a575c35104d6bb6629d8c4fed3d3b) is 8M, max 584.8M, 576.8M free. Jan 29 16:17:59.358923 systemd-journald[1096]: Received client request to flush runtime journal. Jan 29 16:17:59.358978 kernel: loop0: detected capacity change from 0 to 147912 Jan 29 16:17:59.240526 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:17:59.263094 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:17:59.265293 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:17:59.271734 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:17:59.278860 udevadm[1145]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 16:17:59.294256 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:17:59.351000 systemd-tmpfiles[1139]: ACLs are not supported, ignoring. Jan 29 16:17:59.351014 systemd-tmpfiles[1139]: ACLs are not supported, ignoring. Jan 29 16:17:59.357076 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:17:59.366512 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:17:59.367607 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:17:59.375044 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:17:59.417591 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:17:59.447957 kernel: loop1: detected capacity change from 0 to 8 Jan 29 16:17:59.457375 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:17:59.468523 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:17:59.469370 kernel: loop2: detected capacity change from 0 to 210664 Jan 29 16:17:59.491998 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Jan 29 16:17:59.492436 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Jan 29 16:17:59.497335 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:17:59.538380 kernel: loop3: detected capacity change from 0 to 138176 Jan 29 16:17:59.616344 kernel: loop4: detected capacity change from 0 to 147912 Jan 29 16:17:59.699368 kernel: loop5: detected capacity change from 0 to 8 Jan 29 16:17:59.704352 kernel: loop6: detected capacity change from 0 to 210664 Jan 29 16:17:59.780357 kernel: loop7: detected capacity change from 0 to 138176 Jan 29 16:17:59.783924 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:17:59.848975 (sd-merge)[1168]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 29 16:17:59.849747 (sd-merge)[1168]: Merged extensions into '/usr'. Jan 29 16:17:59.862528 systemd[1]: Reload requested from client PID 1138 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:17:59.862687 systemd[1]: Reloading... Jan 29 16:17:59.969246 zram_generator::config[1193]: No configuration found. 
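The (sd-merge) lines above show systemd-sysext merging four extension images into /usr. Each image carries an extension-release file whose ID must match the host's os-release ID (or be "_any") before it is merged; a simplified Python sketch of that compatibility check, noting that the real check also covers SYSEXT_LEVEL/VERSION_ID and architecture:

```python
# Sketch of the gate behind "Merged extensions into '/usr'":
# systemd-sysext only merges an image whose extension-release ID
# matches the host's /etc/os-release ID (or is "_any"). Simplified.
def parse_release(text: str) -> dict:
    fields = {}
    for line in text.splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            fields[key] = value.strip().strip('"')
    return fields

def is_compatible(host_os_release: str, extension_release: str) -> bool:
    host = parse_release(host_os_release)
    ext = parse_release(extension_release)
    return ext.get("ID") in ("_any", host.get("ID"))

host = 'ID=flatcar\nVERSION_ID=4230.0.0\n'
ext = 'ID=flatcar\nSYSEXT_LEVEL=1.0\n'
print(is_compatible(host, ext))  # True
```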
Jan 29 16:18:00.143532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:18:00.231859 systemd[1]: Reloading finished in 368 ms. Jan 29 16:18:00.250385 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:18:00.251369 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:18:00.260417 systemd[1]: Starting ensure-sysext.service... Jan 29 16:18:00.262669 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:18:00.275688 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:18:00.293812 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:18:00.294083 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:18:00.294379 systemd[1]: Reload requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:18:00.294399 systemd[1]: Reloading... Jan 29 16:18:00.294963 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:18:00.295241 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Jan 29 16:18:00.295302 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Jan 29 16:18:00.303611 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:18:00.303849 systemd-tmpfiles[1253]: Skipping /boot Jan 29 16:18:00.319798 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:18:00.319922 systemd-tmpfiles[1253]: Skipping /boot Jan 29 16:18:00.340373 systemd-udevd[1254]: Using default interface naming scheme 'v255'. Jan 29 16:18:00.385970 zram_generator::config[1281]: No configuration found. Jan 29 16:18:00.457902 ldconfig[1133]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:18:00.515387 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1307) Jan 29 16:18:00.603356 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 16:18:00.614498 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:18:00.614578 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 29 16:18:00.637491 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 29 16:18:00.656374 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 16:18:00.707412 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:18:00.726571 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 29 16:18:00.726628 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 29 16:18:00.730889 kernel: Console: switching to colour dummy device 80x25 Jan 29 16:18:00.732610 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 16:18:00.732646 kernel: [drm] features: -context_init Jan 29 16:18:00.734349 kernel: [drm] number of scanouts: 1 Jan 29 16:18:00.737351 kernel: [drm] number of cap sets: 0 Jan 29 16:18:00.739382 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 29 16:18:00.748180 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 16:18:00.748269 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 16:18:00.756360 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 16:18:00.765037 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 16:18:00.765876 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:18:00.766153 systemd[1]: Reloading finished in 471 ms. Jan 29 16:18:00.774483 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:18:00.775024 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:18:00.785240 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:18:00.819131 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:18:00.824758 systemd[1]: Finished ensure-sysext.service. Jan 29 16:18:00.845286 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:18:00.850505 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:18:00.859509 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:18:00.859755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:18:00.861401 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:18:00.864494 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:18:00.867186 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:18:00.873493 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:18:00.876430 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:18:00.878312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:18:00.880260 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:18:00.881426 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:18:00.884456 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 29 16:18:00.893574 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:18:00.898624 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:18:00.910564 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 16:18:00.917569 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:18:00.919966 lvm[1376]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:18:00.921482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:18:00.923206 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:18:00.946562 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:18:00.946941 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:18:00.947318 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:18:00.951051 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:18:00.951884 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:18:00.954750 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:18:00.954926 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:18:00.959751 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:18:00.959927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:18:00.968932 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:18:00.978676 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:18:00.978860 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:18:00.987399 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:18:00.992837 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:18:00.994131 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:18:01.008512 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:18:01.015161 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:18:01.019575 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:18:01.030539 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:18:01.048414 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:18:01.058440 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:18:01.068920 augenrules[1423]: No rules Jan 29 16:18:01.071023 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:18:01.071249 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:18:01.073226 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:18:01.093066 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 29 16:18:01.098720 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:18:01.141671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:18:01.167386 systemd-resolved[1385]: Positive Trust Anchors: Jan 29 16:18:01.167401 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:18:01.167449 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:18:01.172639 systemd-resolved[1385]: Using system hostname 'ci-4230-0-0-a-7095f58259.novalocal'. Jan 29 16:18:01.173960 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:18:01.174708 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:18:01.198858 systemd-networkd[1384]: lo: Link UP Jan 29 16:18:01.198868 systemd-networkd[1384]: lo: Gained carrier Jan 29 16:18:01.200193 systemd-networkd[1384]: Enumeration completed Jan 29 16:18:01.200292 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:18:01.201701 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:18:01.201774 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:18:01.203077 systemd[1]: Reached target network.target - Network. Jan 29 16:18:01.203765 systemd-networkd[1384]: eth0: Link UP Jan 29 16:18:01.203839 systemd-networkd[1384]: eth0: Gained carrier Jan 29 16:18:01.203899 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:18:01.211571 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:18:01.219403 systemd-networkd[1384]: eth0: DHCPv4 address 172.24.4.227/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 16:18:01.220104 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection. Jan 29 16:18:01.221468 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:18:01.222228 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 16:18:01.223035 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:18:01.226303 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:18:01.227017 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:18:01.231042 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
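The positive trust anchor systemd-resolved logs above is the DNSSEC root zone's KSK-2017 DS record. Its fields follow RFC 4034: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), then the digest. A tiny sketch pulling those fields apart, using the record string copied from the log:

```python
# Parse the DS record logged by systemd-resolved above into its
# RFC 4034 fields: key tag, algorithm (8 = RSA/SHA-256),
# digest type (2 = SHA-256), digest.
ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
owner, klass, rtype, key_tag, algorithm, digest_type, digest = ds.split()
print(key_tag, algorithm, digest_type, f"{len(digest) // 2}-byte digest")
# -> 20326 8 2 32-byte digest
```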
Jan 29 16:18:01.231695 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:18:01.231729 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:18:01.232208 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:18:01.232906 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:18:01.233533 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:18:01.234032 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:18:01.238442 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:18:01.241030 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:18:01.246294 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:18:01.248969 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:18:01.252235 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:18:01.257834 systemd-timesyncd[1386]: Contacted time server 95.179.212.126:123 (0.flatcar.pool.ntp.org). Jan 29 16:18:01.257971 systemd-timesyncd[1386]: Initial clock synchronization to Wed 2025-01-29 16:18:01.492717 UTC. Jan 29 16:18:01.260877 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:18:01.263999 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:18:01.269429 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:18:01.270980 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:18:01.272961 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:18:01.273911 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:18:01.274971 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:18:01.275091 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:18:01.280501 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:18:01.288146 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:18:01.294753 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:18:01.305429 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:18:01.311185 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:18:01.317190 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:18:01.324388 jq[1454]: false Jan 29 16:18:01.324518 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:18:01.331281 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:18:01.341540 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:18:01.346223 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
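The timesyncd lines above record contacting 95.179.212.126:123 (0.flatcar.pool.ntp.org) and a one-shot initial clock synchronization. A minimal SNTP query in Python sketches what that exchange involves; real systemd-timesyncd does considerably more (poll-interval adaptation, jitter filtering, gradual slewing):

```python
# Minimal SNTP client query against the pool hostname logged above.
# Reads only the integer seconds of the server's transmit timestamp.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = "0.flatcar.pool.ntp.org") -> float:
    packet = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    transmit_secs = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, seconds part
    return transmit_secs - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    print(f"clock offset ~ {sntp_time() - time.time():+.3f}s")
```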
Jan 29 16:18:01.354278 extend-filesystems[1455]: Found loop4 Jan 29 16:18:01.367467 extend-filesystems[1455]: Found loop5 Jan 29 16:18:01.367467 extend-filesystems[1455]: Found loop6 Jan 29 16:18:01.367467 extend-filesystems[1455]: Found loop7 Jan 29 16:18:01.367467 extend-filesystems[1455]: Found vda Jan 29 16:18:01.367467 extend-filesystems[1455]: Found vda1 Jan 29 16:18:01.367467 extend-filesystems[1455]: Found vda2 Jan 29 16:18:01.367467 extend-filesystems[1455]: Found vda3 Jan 29 16:18:01.367467 extend-filesystems[1455]: Found usr Jan 29 16:18:01.367467 extend-filesystems[1455]: Found vda4 Jan 29 16:18:01.367467 extend-filesystems[1455]: Found vda6 Jan 29 16:18:01.367467 extend-filesystems[1455]: Found vda7 Jan 29 16:18:01.367467 extend-filesystems[1455]: Found vda9 Jan 29 16:18:01.367467 extend-filesystems[1455]: Checking size of /dev/vda9 Jan 29 16:18:01.428918 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 29 16:18:01.362221 dbus-daemon[1451]: [system] SELinux support is enabled Jan 29 16:18:01.362633 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:18:01.429370 extend-filesystems[1455]: Resized partition /dev/vda9 Jan 29 16:18:01.625549 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 29 16:18:01.625641 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1308) Jan 29 16:18:01.369781 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:18:01.626091 extend-filesystems[1476]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:18:01.373674 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:18:01.375488 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:18:01.641537 update_engine[1470]: I20250129 16:18:01.426921 1470 main.cc:92] Flatcar Update Engine starting Jan 29 16:18:01.641537 update_engine[1470]: I20250129 16:18:01.447544 1470 update_check_scheduler.cc:74] Next update check in 10m9s Jan 29 16:18:01.390885 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:18:01.642280 jq[1471]: true Jan 29 16:18:01.411313 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:18:01.657302 extend-filesystems[1476]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 16:18:01.657302 extend-filesystems[1476]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 16:18:01.657302 extend-filesystems[1476]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 29 16:18:01.436766 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:18:01.677643 jq[1481]: true Jan 29 16:18:01.680493 extend-filesystems[1455]: Resized filesystem in /dev/vda9 Jan 29 16:18:01.436982 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:18:01.685656 tar[1479]: linux-amd64/helm Jan 29 16:18:01.437282 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:18:01.437487 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:18:01.460788 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:18:01.460983 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
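The EXT4/resize2fs lines above record the online grow of /dev/vda9 from 1617920 to 2014203 blocks at 4 KiB each. In bytes, that is roughly 6.17 GiB growing to 7.68 GiB, about 1548 MiB reclaimed from the enlarged partition:

```python
# The resize logged above, in bytes. Block counts come from the
# kernel/resize2fs lines; the 4096-byte block size from "(4k) blocks".
BLOCK = 4096
old, new = 1617920, 2014203
print(old * BLOCK / 2**30)          # ~6.17 GiB before
print(new * BLOCK / 2**30)          # ~7.68 GiB after
print((new - old) * BLOCK / 2**20)  # ~1548 MiB gained
```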
Jan 29 16:18:01.495689 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:18:01.505694 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:18:01.507766 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:18:01.507791 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:18:01.508413 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:18:01.508431 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:18:01.519761 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:18:01.653288 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:18:01.654898 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:18:01.702229 systemd-logind[1468]: New seat seat0. Jan 29 16:18:01.704798 systemd-logind[1468]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 16:18:01.704818 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:18:01.704981 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:18:01.763716 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:18:01.947815 bash[1502]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:18:01.945844 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:18:01.961587 sshd_keygen[1478]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:18:01.966660 systemd[1]: Starting sshkeys.service... Jan 29 16:18:01.989757 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:18:02.007829 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 16:18:02.027852 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:18:02.047006 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:18:02.053586 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:18:02.054478 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:18:02.072828 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:18:02.092151 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:18:02.107817 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:18:02.118777 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:18:02.131018 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:18:02.274140 containerd[1485]: time="2025-01-29T16:18:02.274019009Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:18:02.321630 containerd[1485]: time="2025-01-29T16:18:02.321572765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 16:18:02.326607 containerd[1485]: time="2025-01-29T16:18:02.326558265Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:18:02.326745 containerd[1485]: time="2025-01-29T16:18:02.326728090Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:18:02.326814 containerd[1485]: time="2025-01-29T16:18:02.326799972Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:18:02.327028 containerd[1485]: time="2025-01-29T16:18:02.327009956Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:18:02.327410 containerd[1485]: time="2025-01-29T16:18:02.327391477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:18:02.327871 containerd[1485]: time="2025-01-29T16:18:02.327521318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:18:02.330417 containerd[1485]: time="2025-01-29T16:18:02.329593755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:18:02.330417 containerd[1485]: time="2025-01-29T16:18:02.329893875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:18:02.330417 containerd[1485]: time="2025-01-29T16:18:02.329913614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:18:02.330417 containerd[1485]: time="2025-01-29T16:18:02.329928846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:18:02.330417 containerd[1485]: time="2025-01-29T16:18:02.329940820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:18:02.330417 containerd[1485]: time="2025-01-29T16:18:02.330024417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:18:02.330417 containerd[1485]: time="2025-01-29T16:18:02.330239734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:18:02.330717 containerd[1485]: time="2025-01-29T16:18:02.330696735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:18:02.331683 containerd[1485]: time="2025-01-29T16:18:02.331385564Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 29 16:18:02.331683 containerd[1485]: time="2025-01-29T16:18:02.331495037Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:18:02.331683 containerd[1485]: time="2025-01-29T16:18:02.331547747Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:18:02.340464 containerd[1485]: time="2025-01-29T16:18:02.340423864Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:18:02.340657 containerd[1485]: time="2025-01-29T16:18:02.340639572Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:18:02.340779 containerd[1485]: time="2025-01-29T16:18:02.340762947Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:18:02.340851 containerd[1485]: time="2025-01-29T16:18:02.340836716Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:18:02.340917 containerd[1485]: time="2025-01-29T16:18:02.340902575Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:18:02.341130 containerd[1485]: time="2025-01-29T16:18:02.341111238Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:18:02.341570 containerd[1485]: time="2025-01-29T16:18:02.341551173Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:18:02.341725 containerd[1485]: time="2025-01-29T16:18:02.341707488Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:18:02.341842 containerd[1485]: time="2025-01-29T16:18:02.341826088Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:18:02.341905 containerd[1485]: time="2025-01-29T16:18:02.341892307Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:18:02.341966 containerd[1485]: time="2025-01-29T16:18:02.341952948Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:18:02.342024 containerd[1485]: time="2025-01-29T16:18:02.342011681Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:18:02.342082 containerd[1485]: time="2025-01-29T16:18:02.342069300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:18:02.342145 containerd[1485]: time="2025-01-29T16:18:02.342131107Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:18:02.342218 containerd[1485]: time="2025-01-29T16:18:02.342203390Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:18:02.342278 containerd[1485]: time="2025-01-29T16:18:02.342265279Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:18:02.342336 containerd[1485]: time="2025-01-29T16:18:02.342323197Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 29 16:18:02.342425 containerd[1485]: time="2025-01-29T16:18:02.342410497Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:18:02.342491 containerd[1485]: time="2025-01-29T16:18:02.342478223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342553 containerd[1485]: time="2025-01-29T16:18:02.342540080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342611447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342633568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342647893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342662465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342676192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342691177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342705265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342722869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342736039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342748796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342762419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342777548Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342799463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342813995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.342864 containerd[1485]: time="2025-01-29T16:18:02.342827329Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:18:02.343341 containerd[1485]: time="2025-01-29T16:18:02.343221319Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 29 16:18:02.343341 containerd[1485]: time="2025-01-29T16:18:02.343251381Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:18:02.344406 containerd[1485]: time="2025-01-29T16:18:02.343427074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:18:02.344406 containerd[1485]: time="2025-01-29T16:18:02.343452630Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:18:02.344406 containerd[1485]: time="2025-01-29T16:18:02.343466440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.344406 containerd[1485]: time="2025-01-29T16:18:02.343486972Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:18:02.344406 containerd[1485]: time="2025-01-29T16:18:02.343499647Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:18:02.344406 containerd[1485]: time="2025-01-29T16:18:02.343511301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 16:18:02.344567 containerd[1485]: time="2025-01-29T16:18:02.343837131Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:18:02.344567 containerd[1485]: time="2025-01-29T16:18:02.343897463Z" level=info msg="Connect containerd service" Jan 29 16:18:02.344567 containerd[1485]: time="2025-01-29T16:18:02.343922966Z" level=info msg="using legacy CRI server" Jan 29 16:18:02.344567 containerd[1485]: time="2025-01-29T16:18:02.343930299Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:18:02.344567 containerd[1485]: time="2025-01-29T16:18:02.344038607Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:18:02.345177 containerd[1485]: time="2025-01-29T16:18:02.345154705Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:18:02.345508 containerd[1485]: time="2025-01-29T16:18:02.345490828Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:18:02.345682 containerd[1485]: time="2025-01-29T16:18:02.345572806Z" level=info msg="Start subscribing containerd event" Jan 29 16:18:02.345770 containerd[1485]: time="2025-01-29T16:18:02.345756523Z" level=info msg="Start recovering state" Jan 29 16:18:02.345877 containerd[1485]: time="2025-01-29T16:18:02.345861870Z" level=info msg="Start event monitor" Jan 29 16:18:02.346009 containerd[1485]: time="2025-01-29T16:18:02.345994919Z" level=info msg="Start snapshots syncer" Jan 29 16:18:02.346074 containerd[1485]: time="2025-01-29T16:18:02.346061273Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:18:02.346124 containerd[1485]: time="2025-01-29T16:18:02.346113179Z" level=info msg="Start streaming server" Jan 29 16:18:02.346251 containerd[1485]: time="2025-01-29T16:18:02.345926069Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:18:02.346512 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:18:02.350830 containerd[1485]: time="2025-01-29T16:18:02.350497717Z" level=info msg="containerd successfully booted in 0.077425s" Jan 29 16:18:02.388116 tar[1479]: linux-amd64/LICENSE Jan 29 16:18:02.388539 tar[1479]: linux-amd64/README.md Jan 29 16:18:02.399003 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:18:02.917937 systemd-networkd[1384]: eth0: Gained IPv6LL Jan 29 16:18:02.923551 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:18:02.928715 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:18:02.941603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:18:02.959029 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:18:03.014840 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:18:04.975668 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 29 16:18:04.992269 systemd[1]: Started sshd@0-172.24.4.227:22-172.24.4.1:47388.service - OpenSSH per-connection server daemon (172.24.4.1:47388). Jan 29 16:18:05.724672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:18:05.735061 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:18:06.235438 sshd[1562]: Accepted publickey for core from 172.24.4.1 port 47388 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:18:06.237556 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:18:06.257436 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:18:06.269534 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:18:06.286819 systemd-logind[1468]: New session 1 of user core. Jan 29 16:18:06.317510 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:18:06.333926 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:18:06.355081 (systemd)[1577]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:18:06.362785 systemd-logind[1468]: New session c1 of user core. Jan 29 16:18:06.537185 systemd[1577]: Queued start job for default target default.target. Jan 29 16:18:06.543795 systemd[1577]: Created slice app.slice - User Application Slice. Jan 29 16:18:06.543821 systemd[1577]: Reached target paths.target - Paths. Jan 29 16:18:06.543859 systemd[1577]: Reached target timers.target - Timers. Jan 29 16:18:06.546170 systemd[1577]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:18:06.573959 systemd[1577]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:18:06.574501 systemd[1577]: Reached target sockets.target - Sockets. Jan 29 16:18:06.574607 systemd[1577]: Reached target basic.target - Basic System. Jan 29 16:18:06.574682 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:18:06.575887 systemd[1577]: Reached target default.target - Main User Target. Jan 29 16:18:06.575926 systemd[1577]: Startup finished in 206ms. Jan 29 16:18:06.583539 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:18:07.018263 systemd[1]: Started sshd@1-172.24.4.227:22-172.24.4.1:47396.service - OpenSSH per-connection server daemon (172.24.4.1:47396). Jan 29 16:18:07.282467 login[1534]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 16:18:07.291627 login[1535]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 16:18:07.296482 systemd-logind[1468]: New session 3 of user core. Jan 29 16:18:07.305820 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:18:07.314037 systemd-logind[1468]: New session 2 of user core. Jan 29 16:18:07.321781 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 29 16:18:07.659612 kubelet[1570]: E0129 16:18:07.659387 1570 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:18:07.663138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:18:07.663527 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:18:07.664335 systemd[1]: kubelet.service: Consumed 2.235s CPU time, 246.4M memory peak. Jan 29 16:18:08.376063 coreos-metadata[1450]: Jan 29 16:18:08.375 WARN failed to locate config-drive, using the metadata service API instead Jan 29 16:18:08.446831 coreos-metadata[1450]: Jan 29 16:18:08.446 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 16:18:08.645319 coreos-metadata[1450]: Jan 29 16:18:08.645 INFO Fetch successful Jan 29 16:18:08.645784 coreos-metadata[1450]: Jan 29 16:18:08.645 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 16:18:08.660503 coreos-metadata[1450]: Jan 29 16:18:08.660 INFO Fetch successful Jan 29 16:18:08.660793 coreos-metadata[1450]: Jan 29 16:18:08.660 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 16:18:08.678591 coreos-metadata[1450]: Jan 29 16:18:08.678 INFO Fetch successful Jan 29 16:18:08.678846 coreos-metadata[1450]: Jan 29 16:18:08.678 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 16:18:08.693079 coreos-metadata[1450]: Jan 29 16:18:08.692 INFO Fetch successful Jan 29 16:18:08.693079 coreos-metadata[1450]: Jan 29 16:18:08.693 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 29 16:18:08.707789 coreos-metadata[1450]: Jan 29 16:18:08.707 INFO Fetch successful Jan 29 16:18:08.707789 coreos-metadata[1450]: Jan 29 16:18:08.707 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 16:18:08.721950 coreos-metadata[1450]: Jan 29 16:18:08.721 INFO Fetch successful Jan 29 16:18:08.776303 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:18:08.777830 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:18:08.985532 sshd[1589]: Accepted publickey for core from 172.24.4.1 port 47396 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:18:08.988302 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:18:08.998799 systemd-logind[1468]: New session 4 of user core. Jan 29 16:18:09.010905 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 29 16:18:09.106029 coreos-metadata[1522]: Jan 29 16:18:09.105 WARN failed to locate config-drive, using the metadata service API instead Jan 29 16:18:09.147971 coreos-metadata[1522]: Jan 29 16:18:09.147 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 16:18:09.165039 coreos-metadata[1522]: Jan 29 16:18:09.164 INFO Fetch successful Jan 29 16:18:09.165039 coreos-metadata[1522]: Jan 29 16:18:09.165 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 16:18:09.180777 coreos-metadata[1522]: Jan 29 16:18:09.180 INFO Fetch successful Jan 29 16:18:09.186671 unknown[1522]: wrote ssh authorized keys file for user: core Jan 29 16:18:09.223776 update-ssh-keys[1630]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:18:09.225604 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:18:09.228679 systemd[1]: Finished sshkeys.service. Jan 29 16:18:09.236259 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:18:09.237024 systemd[1]: Startup finished in 1.278s (kernel) + 16.203s (initrd) + 11.281s (userspace) = 28.762s. Jan 29 16:18:09.782699 sshd[1626]: Connection closed by 172.24.4.1 port 47396 Jan 29 16:18:09.780599 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Jan 29 16:18:09.800912 systemd[1]: sshd@1-172.24.4.227:22-172.24.4.1:47396.service: Deactivated successfully. Jan 29 16:18:09.805176 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:18:09.807219 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:18:09.814936 systemd[1]: Started sshd@2-172.24.4.227:22-172.24.4.1:47412.service - OpenSSH per-connection server daemon (172.24.4.1:47412). Jan 29 16:18:09.818134 systemd-logind[1468]: Removed session 4. Jan 29 16:18:11.312706 sshd[1637]: Accepted publickey for core from 172.24.4.1 port 47412 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:18:11.315449 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:18:11.327425 systemd-logind[1468]: New session 5 of user core. Jan 29 16:18:11.336950 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:18:12.102610 sshd[1640]: Connection closed by 172.24.4.1 port 47412 Jan 29 16:18:12.101385 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Jan 29 16:18:12.121237 systemd[1]: sshd@2-172.24.4.227:22-172.24.4.1:47412.service: Deactivated successfully. Jan 29 16:18:12.124998 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:18:12.128629 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:18:12.134988 systemd[1]: Started sshd@3-172.24.4.227:22-172.24.4.1:47420.service - OpenSSH per-connection server daemon (172.24.4.1:47420). Jan 29 16:18:12.138485 systemd-logind[1468]: Removed session 5. Jan 29 16:18:13.636026 sshd[1645]: Accepted publickey for core from 172.24.4.1 port 47420 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:18:13.638905 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:18:13.651451 systemd-logind[1468]: New session 6 of user core. Jan 29 16:18:13.662674 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 29 16:18:14.390681 sshd[1648]: Connection closed by 172.24.4.1 port 47420 Jan 29 16:18:14.389604 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Jan 29 16:18:14.409553 systemd[1]: sshd@3-172.24.4.227:22-172.24.4.1:47420.service: Deactivated successfully. Jan 29 16:18:14.412981 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:18:14.416794 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:18:14.423125 systemd[1]: Started sshd@4-172.24.4.227:22-172.24.4.1:59174.service - OpenSSH per-connection server daemon (172.24.4.1:59174). Jan 29 16:18:14.426140 systemd-logind[1468]: Removed session 6. Jan 29 16:18:15.740220 sshd[1653]: Accepted publickey for core from 172.24.4.1 port 59174 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:18:15.743045 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:18:15.755052 systemd-logind[1468]: New session 7 of user core. Jan 29 16:18:15.764700 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:18:16.249240 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:18:16.250679 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:18:16.268835 sudo[1657]: pam_unix(sudo:session): session closed for user root Jan 29 16:18:16.506520 sshd[1656]: Connection closed by 172.24.4.1 port 59174 Jan 29 16:18:16.507775 sshd-session[1653]: pam_unix(sshd:session): session closed for user core Jan 29 16:18:16.524864 systemd[1]: sshd@4-172.24.4.227:22-172.24.4.1:59174.service: Deactivated successfully. Jan 29 16:18:16.528240 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:18:16.530187 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:18:16.537954 systemd[1]: Started sshd@5-172.24.4.227:22-172.24.4.1:59178.service - OpenSSH per-connection server daemon (172.24.4.1:59178). Jan 29 16:18:16.541733 systemd-logind[1468]: Removed session 7. Jan 29 16:18:17.717920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:18:17.727762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:18:17.900040 sshd[1662]: Accepted publickey for core from 172.24.4.1 port 59178 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:18:17.902546 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:18:17.921956 systemd-logind[1468]: New session 8 of user core. Jan 29 16:18:17.927957 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:18:18.009567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:18:18.014230 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:18:18.240119 kubelet[1674]: E0129 16:18:18.240016 1674 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:18:18.249527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:18:18.249695 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 16:18:18.250394 systemd[1]: kubelet.service: Consumed 320ms CPU time, 97.5M memory peak. Jan 29 16:18:18.360493 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:18:18.361192 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:18:18.370113 sudo[1683]: pam_unix(sudo:session): session closed for user root Jan 29 16:18:18.383689 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:18:18.384456 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:18:18.412859 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:18:18.466234 augenrules[1705]: No rules Jan 29 16:18:18.467832 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:18:18.468099 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:18:18.470312 sudo[1682]: pam_unix(sudo:session): session closed for user root Jan 29 16:18:18.667045 sshd[1668]: Connection closed by 172.24.4.1 port 59178 Jan 29 16:18:18.668797 sshd-session[1662]: pam_unix(sshd:session): session closed for user core Jan 29 16:18:18.684252 systemd[1]: sshd@5-172.24.4.227:22-172.24.4.1:59178.service: Deactivated successfully. Jan 29 16:18:18.688290 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:18:18.690471 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:18:18.699927 systemd[1]: Started sshd@6-172.24.4.227:22-172.24.4.1:59194.service - OpenSSH per-connection server daemon (172.24.4.1:59194). Jan 29 16:18:18.702633 systemd-logind[1468]: Removed session 8. Jan 29 16:18:19.834376 sshd[1713]: Accepted publickey for core from 172.24.4.1 port 59194 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4 Jan 29 16:18:19.837312 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:18:19.863128 systemd-logind[1468]: New session 9 of user core. Jan 29 16:18:19.874714 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:18:20.184215 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:18:20.184912 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:18:20.896694 (dockerd)[1733]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:18:20.896906 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:18:21.688004 dockerd[1733]: time="2025-01-29T16:18:21.687914655Z" level=info msg="Starting up" Jan 29 16:18:21.820639 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3687050804-merged.mount: Deactivated successfully. Jan 29 16:18:21.853894 systemd[1]: var-lib-docker-metacopy\x2dcheck3142752510-merged.mount: Deactivated successfully. Jan 29 16:18:21.897340 dockerd[1733]: time="2025-01-29T16:18:21.897045868Z" level=info msg="Loading containers: start." Jan 29 16:18:22.096463 kernel: Initializing XFRM netlink socket Jan 29 16:18:22.222033 systemd-networkd[1384]: docker0: Link UP Jan 29 16:18:22.262448 dockerd[1733]: time="2025-01-29T16:18:22.262388400Z" level=info msg="Loading containers: done." 
Jan 29 16:18:22.289461 dockerd[1733]: time="2025-01-29T16:18:22.289275235Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:18:22.289710 dockerd[1733]: time="2025-01-29T16:18:22.289514960Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:18:22.289777 dockerd[1733]: time="2025-01-29T16:18:22.289733061Z" level=info msg="Daemon has completed initialization" Jan 29 16:18:22.354181 dockerd[1733]: time="2025-01-29T16:18:22.353485461Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:18:22.353815 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:18:24.273228 containerd[1485]: time="2025-01-29T16:18:24.273107836Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 16:18:24.967415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268347396.mount: Deactivated successfully. Jan 29 16:18:27.052814 containerd[1485]: time="2025-01-29T16:18:27.052755054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:27.055394 containerd[1485]: time="2025-01-29T16:18:27.055351202Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020" Jan 29 16:18:27.057237 containerd[1485]: time="2025-01-29T16:18:27.057190423Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:27.060728 containerd[1485]: time="2025-01-29T16:18:27.060685451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:27.062082 containerd[1485]: time="2025-01-29T16:18:27.061895048Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.788720429s" Jan 29 16:18:27.062082 containerd[1485]: time="2025-01-29T16:18:27.061941192Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 16:18:27.088705 containerd[1485]: time="2025-01-29T16:18:27.088395497Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 16:18:28.466816 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 16:18:28.475740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:18:28.634527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:18:28.640110 (kubelet)[1996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:18:28.926398 kubelet[1996]: E0129 16:18:28.926235 1996 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:18:28.930417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:18:28.930614 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:18:28.931067 systemd[1]: kubelet.service: Consumed 203ms CPU time, 95.8M memory peak. Jan 29 16:18:29.608755 containerd[1485]: time="2025-01-29T16:18:29.608651094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:29.612621 containerd[1485]: time="2025-01-29T16:18:29.612489028Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753" Jan 29 16:18:29.613911 containerd[1485]: time="2025-01-29T16:18:29.613793844Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:29.621400 containerd[1485]: time="2025-01-29T16:18:29.621286451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:29.625615 containerd[1485]: time="2025-01-29T16:18:29.625260107Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.536787761s" Jan 29 16:18:29.625615 containerd[1485]: time="2025-01-29T16:18:29.625412193Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 16:18:29.684965 containerd[1485]: time="2025-01-29T16:18:29.684258516Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 16:18:36.288352 containerd[1485]: time="2025-01-29T16:18:36.288223993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:36.289858 containerd[1485]: time="2025-01-29T16:18:36.289628729Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072" Jan 29 16:18:36.292350 containerd[1485]: time="2025-01-29T16:18:36.290949862Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:36.294368 containerd[1485]: time="2025-01-29T16:18:36.294317204Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:36.295467 containerd[1485]: time="2025-01-29T16:18:36.295435648Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 6.611108181s" Jan 29 16:18:36.295517 containerd[1485]: time="2025-01-29T16:18:36.295465958Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 16:18:36.320635 containerd[1485]: time="2025-01-29T16:18:36.320599379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 16:18:38.080940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201706679.mount: Deactivated successfully. Jan 29 16:18:38.576373 containerd[1485]: time="2025-01-29T16:18:38.575933340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:38.577693 containerd[1485]: time="2025-01-29T16:18:38.577650279Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 29 16:18:38.579189 containerd[1485]: time="2025-01-29T16:18:38.579152441Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:38.581730 containerd[1485]: time="2025-01-29T16:18:38.581672629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:38.583048 containerd[1485]: time="2025-01-29T16:18:38.582314382Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.261678659s" Jan 29 16:18:38.583048 containerd[1485]: time="2025-01-29T16:18:38.582376212Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 16:18:38.607401 containerd[1485]: time="2025-01-29T16:18:38.607373203Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 16:18:38.966641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 16:18:38.979771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:18:39.158946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:18:39.163092 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:18:39.296883 kubelet[2039]: E0129 16:18:39.296717 2039 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:18:39.299500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:18:39.299644 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:18:39.300039 systemd[1]: kubelet.service: Consumed 233ms CPU time, 97.9M memory peak. Jan 29 16:18:39.665082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1497683076.mount: Deactivated successfully. Jan 29 16:18:40.865073 containerd[1485]: time="2025-01-29T16:18:40.864996358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:40.868007 containerd[1485]: time="2025-01-29T16:18:40.867952313Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 29 16:18:40.869563 containerd[1485]: time="2025-01-29T16:18:40.869484300Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:40.877614 containerd[1485]: time="2025-01-29T16:18:40.877523892Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.269977226s" Jan 29 16:18:40.877614 containerd[1485]: time="2025-01-29T16:18:40.877575457Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 16:18:40.877816 containerd[1485]: time="2025-01-29T16:18:40.877545981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:40.903016 containerd[1485]: time="2025-01-29T16:18:40.902960488Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 16:18:41.613396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1411988822.mount: Deactivated successfully. 
Jan 29 16:18:41.623279 containerd[1485]: time="2025-01-29T16:18:41.623177188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:41.625423 containerd[1485]: time="2025-01-29T16:18:41.625288775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 29 16:18:41.628406 containerd[1485]: time="2025-01-29T16:18:41.627054379Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:41.635044 containerd[1485]: time="2025-01-29T16:18:41.634964518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:41.637143 containerd[1485]: time="2025-01-29T16:18:41.637055310Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 734.047296ms" Jan 29 16:18:41.637143 containerd[1485]: time="2025-01-29T16:18:41.637125214Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 16:18:41.697664 containerd[1485]: time="2025-01-29T16:18:41.697576176Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 16:18:42.291025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3305984387.mount: Deactivated successfully. Jan 29 16:18:45.098590 containerd[1485]: time="2025-01-29T16:18:45.098533907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:45.102099 containerd[1485]: time="2025-01-29T16:18:45.101614389Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 29 16:18:45.103629 containerd[1485]: time="2025-01-29T16:18:45.103575572Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:45.110348 containerd[1485]: time="2025-01-29T16:18:45.108841026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:18:45.113714 containerd[1485]: time="2025-01-29T16:18:45.113686028Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.415784866s" Jan 29 16:18:45.113811 containerd[1485]: time="2025-01-29T16:18:45.113795644Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 16:18:46.500495 update_engine[1470]: I20250129 16:18:46.499368 1470 update_attempter.cc:509] Updating boot flags... 
Jan 29 16:18:46.569009 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2214) Jan 29 16:18:46.757357 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2213) Jan 29 16:18:49.467071 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 16:18:49.476780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:18:49.787178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:18:49.798631 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:18:49.913420 kubelet[2229]: E0129 16:18:49.913373 2229 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:18:49.916051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:18:49.916252 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:18:49.916776 systemd[1]: kubelet.service: Consumed 271ms CPU time, 94.2M memory peak. Jan 29 16:18:50.426441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:18:50.426867 systemd[1]: kubelet.service: Consumed 271ms CPU time, 94.2M memory peak. Jan 29 16:18:50.440874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:18:50.488533 systemd[1]: Reload requested from client PID 2243 ('systemctl') (unit session-9.scope)... Jan 29 16:18:50.488579 systemd[1]: Reloading... Jan 29 16:18:50.600417 zram_generator::config[2292]: No configuration found. Jan 29 16:18:50.960543 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:18:51.078424 systemd[1]: Reloading finished in 588 ms. Jan 29 16:18:51.136097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:18:51.141862 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:18:51.147934 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:18:51.148232 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:18:51.148429 systemd[1]: kubelet.service: Consumed 101ms CPU time, 83.3M memory peak. Jan 29 16:18:51.153990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:18:51.268637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:18:51.271381 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:18:51.326592 kubelet[2358]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:18:51.326943 kubelet[2358]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 29 16:18:51.326999 kubelet[2358]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:18:51.327114 kubelet[2358]: I0129 16:18:51.327087 2358 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:18:51.664255 kubelet[2358]: I0129 16:18:51.664131 2358 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 16:18:51.664255 kubelet[2358]: I0129 16:18:51.664196 2358 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:18:51.664884 kubelet[2358]: I0129 16:18:51.664838 2358 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 16:18:51.709467 kubelet[2358]: I0129 16:18:51.709185 2358 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:18:51.709467 kubelet[2358]: E0129 16:18:51.709430 2358 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.227:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:51.727503 kubelet[2358]: I0129 16:18:51.727455 2358 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 16:18:51.728152 kubelet[2358]: I0129 16:18:51.728099 2358 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:18:51.728925 kubelet[2358]: I0129 16:18:51.728309 2358 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-a-7095f58259.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 16:18:51.749940 kubelet[2358]: I0129 16:18:51.749843 2358 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 29 16:18:51.749940 kubelet[2358]: I0129 16:18:51.749896 2358 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 16:18:51.750268 kubelet[2358]: I0129 16:18:51.750124 2358 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:18:51.752304 kubelet[2358]: I0129 16:18:51.752234 2358 kubelet.go:400] "Attempting to sync node with API server" Jan 29 16:18:51.752304 kubelet[2358]: I0129 16:18:51.752280 2358 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:18:51.753467 kubelet[2358]: I0129 16:18:51.752364 2358 kubelet.go:312] "Adding apiserver pod source" Jan 29 16:18:51.753467 kubelet[2358]: I0129 16:18:51.752432 2358 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:18:51.765228 kubelet[2358]: I0129 16:18:51.765175 2358 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:18:51.770473 kubelet[2358]: I0129 16:18:51.768985 2358 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:18:51.770473 kubelet[2358]: W0129 16:18:51.769085 2358 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:18:51.770473 kubelet[2358]: I0129 16:18:51.770259 2358 server.go:1264] "Started kubelet" Jan 29 16:18:51.770810 kubelet[2358]: W0129 16:18:51.770553 2358 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.227:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:51.770810 kubelet[2358]: E0129 16:18:51.770651 2358 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.227:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:51.789218 kubelet[2358]: I0129 16:18:51.789131 2358 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:18:51.804112 kubelet[2358]: I0129 16:18:51.801575 2358 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:18:51.804112 kubelet[2358]: W0129 16:18:51.801771 2358 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-a-7095f58259.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:51.804112 kubelet[2358]: E0129 16:18:51.801907 2358 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-a-7095f58259.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:51.804112 kubelet[2358]: I0129 16:18:51.802965 2358 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:18:51.804112 kubelet[2358]: I0129 16:18:51.803950 2358 server.go:455] "Adding debug handlers to kubelet server" Jan 29 16:18:51.805251 kubelet[2358]: I0129 16:18:51.805218 2358 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:18:51.806121 kubelet[2358]: I0129 16:18:51.806066 
2358 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 16:18:51.810850 kubelet[2358]: E0129 16:18:51.810787 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-a-7095f58259.novalocal?timeout=10s\": dial tcp 172.24.4.227:6443: connect: connection refused" interval="200ms" Jan 29 16:18:51.811307 kubelet[2358]: I0129 16:18:51.811280 2358 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:18:51.811824 kubelet[2358]: E0129 16:18:51.811603 2358 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.227:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.227:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-0-a-7095f58259.novalocal.181f362933480f23 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-a-7095f58259.novalocal,UID:ci-4230-0-0-a-7095f58259.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-a-7095f58259.novalocal,},FirstTimestamp:2025-01-29 16:18:51.770220323 +0000 UTC m=+0.493865813,LastTimestamp:2025-01-29 16:18:51.770220323 +0000 UTC m=+0.493865813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-a-7095f58259.novalocal,}" Jan 29 16:18:51.812847 kubelet[2358]: W0129 16:18:51.812770 2358 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:51.813046 kubelet[2358]: E0129 16:18:51.813020 2358 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:51.813293 kubelet[2358]: I0129 16:18:51.813236 2358 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:18:51.814909 kubelet[2358]: I0129 16:18:51.814879 2358 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:18:51.815228 kubelet[2358]: I0129 16:18:51.815187 2358 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:18:51.819394 kubelet[2358]: I0129 16:18:51.819105 2358 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:18:51.826967 kubelet[2358]: I0129 16:18:51.826895 2358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:18:51.829452 kubelet[2358]: I0129 16:18:51.827769 2358 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:18:51.829452 kubelet[2358]: I0129 16:18:51.827795 2358 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:18:51.829452 kubelet[2358]: I0129 16:18:51.827818 2358 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 16:18:51.829452 kubelet[2358]: E0129 16:18:51.827857 2358 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:18:51.833433 kubelet[2358]: W0129 16:18:51.833298 2358 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:51.833433 kubelet[2358]: E0129 16:18:51.833399 2358 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:51.837070 kubelet[2358]: E0129 16:18:51.837050 2358 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:18:51.851870 kubelet[2358]: I0129 16:18:51.851817 2358 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:18:51.851870 kubelet[2358]: I0129 16:18:51.851836 2358 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:18:51.851870 kubelet[2358]: I0129 16:18:51.851856 2358 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:18:51.856953 kubelet[2358]: I0129 16:18:51.856923 2358 policy_none.go:49] "None policy: Start" Jan 29 16:18:51.857727 kubelet[2358]: I0129 16:18:51.857709 2358 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:18:51.857783 kubelet[2358]: I0129 16:18:51.857751 2358 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:18:51.865930 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:18:51.879928 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:18:51.884413 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 16:18:51.893489 kubelet[2358]: I0129 16:18:51.893049 2358 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:18:51.893489 kubelet[2358]: I0129 16:18:51.893263 2358 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:18:51.893489 kubelet[2358]: I0129 16:18:51.893407 2358 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:18:51.894797 kubelet[2358]: E0129 16:18:51.894782 2358 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-0-a-7095f58259.novalocal\" not found" Jan 29 16:18:51.908690 kubelet[2358]: I0129 16:18:51.908653 2358 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:51.909009 kubelet[2358]: E0129 16:18:51.908985 2358 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.227:6443/api/v1/nodes\": dial tcp 172.24.4.227:6443: connect: connection refused" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:51.928794 kubelet[2358]: I0129 16:18:51.928682 2358 topology_manager.go:215] "Topology Admit Handler" podUID="5deb6e678b11ef85d8aefc8dc6bfc807" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:51.930900 kubelet[2358]: I0129 16:18:51.930292 2358 topology_manager.go:215] "Topology Admit Handler" podUID="0810c2d46e34d972fdd471f5b873218a" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:51.932004 kubelet[2358]: I0129 16:18:51.931976 2358 topology_manager.go:215] "Topology Admit Handler" podUID="2b6428bcd0069436bdd358f2a3c6d3cf" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:51.939911 systemd[1]: Created slice kubepods-burstable-pod5deb6e678b11ef85d8aefc8dc6bfc807.slice - libcontainer container kubepods-burstable-pod5deb6e678b11ef85d8aefc8dc6bfc807.slice. Jan 29 16:18:51.960753 systemd[1]: Created slice kubepods-burstable-pod0810c2d46e34d972fdd471f5b873218a.slice - libcontainer container kubepods-burstable-pod0810c2d46e34d972fdd471f5b873218a.slice. Jan 29 16:18:51.967439 systemd[1]: Created slice kubepods-burstable-pod2b6428bcd0069436bdd358f2a3c6d3cf.slice - libcontainer container kubepods-burstable-pod2b6428bcd0069436bdd358f2a3c6d3cf.slice. 
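The kubepods-burstable-pod<uid>.slice units created above follow a mechanical naming rule under the systemd cgroup driver (CgroupDriver "systemd" in the dump earlier): kubepods-<qos>-pod<uid>.slice, with any dashes in the pod UID escaped to underscores, as becomes visible later in this log for pod UID 1e3144be-f1fa-44f5-a8d4-4487b71dadce. A small sketch of that mapping; the rule is read off the log itself, not taken from the kubelet's escaping code:

    package main

    import (
        "fmt"
        "strings"
    )

    // Build the systemd slice name for a pod cgroup:
    // kubepods-<qos>-pod<uid>.slice, with "-" in the UID escaped to "_".
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("burstable", "5deb6e678b11ef85d8aefc8dc6bfc807"))
        fmt.Println(podSlice("besteffort", "1e3144be-f1fa-44f5-a8d4-4487b71dadce"))
        // kubepods-burstable-pod5deb6e678b11ef85d8aefc8dc6bfc807.slice
        // kubepods-besteffort-pod1e3144be_f1fa_44f5_a8d4_4487b71dadce.slice
    }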
Jan 29 16:18:52.012858 kubelet[2358]: E0129 16:18:52.012762 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-a-7095f58259.novalocal?timeout=10s\": dial tcp 172.24.4.227:6443: connect: connection refused" interval="400ms" Jan 29 16:18:52.015066 kubelet[2358]: I0129 16:18:52.015001 2358 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5deb6e678b11ef85d8aefc8dc6bfc807-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"5deb6e678b11ef85d8aefc8dc6bfc807\") " pod="kube-system/kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:52.015204 kubelet[2358]: I0129 16:18:52.015096 2358 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5deb6e678b11ef85d8aefc8dc6bfc807-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"5deb6e678b11ef85d8aefc8dc6bfc807\") " pod="kube-system/kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:52.015204 kubelet[2358]: I0129 16:18:52.015153 2358 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0810c2d46e34d972fdd471f5b873218a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"0810c2d46e34d972fdd471f5b873218a\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:52.015385 kubelet[2358]: I0129 16:18:52.015204 2358 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b6428bcd0069436bdd358f2a3c6d3cf-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"2b6428bcd0069436bdd358f2a3c6d3cf\") " pod="kube-system/kube-scheduler-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:52.015385 kubelet[2358]: I0129 16:18:52.015253 2358 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5deb6e678b11ef85d8aefc8dc6bfc807-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"5deb6e678b11ef85d8aefc8dc6bfc807\") " pod="kube-system/kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:52.015385 kubelet[2358]: I0129 16:18:52.015295 2358 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0810c2d46e34d972fdd471f5b873218a-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"0810c2d46e34d972fdd471f5b873218a\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:52.015667 kubelet[2358]: I0129 16:18:52.015619 2358 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0810c2d46e34d972fdd471f5b873218a-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"0810c2d46e34d972fdd471f5b873218a\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:52.015785 kubelet[2358]: I0129 16:18:52.015753 2358 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0810c2d46e34d972fdd471f5b873218a-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"0810c2d46e34d972fdd471f5b873218a\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:52.015978 kubelet[2358]: I0129 16:18:52.015876 2358 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0810c2d46e34d972fdd471f5b873218a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"0810c2d46e34d972fdd471f5b873218a\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:52.113034 kubelet[2358]: I0129 16:18:52.112981 2358 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:52.113778 kubelet[2358]: E0129 16:18:52.113713 2358 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.227:6443/api/v1/nodes\": dial tcp 172.24.4.227:6443: connect: connection refused" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:52.258929 containerd[1485]: time="2025-01-29T16:18:52.258818913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal,Uid:5deb6e678b11ef85d8aefc8dc6bfc807,Namespace:kube-system,Attempt:0,}" Jan 29 16:18:52.266795 containerd[1485]: time="2025-01-29T16:18:52.266705793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal,Uid:0810c2d46e34d972fdd471f5b873218a,Namespace:kube-system,Attempt:0,}" Jan 29 16:18:52.273761 containerd[1485]: time="2025-01-29T16:18:52.273608601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-a-7095f58259.novalocal,Uid:2b6428bcd0069436bdd358f2a3c6d3cf,Namespace:kube-system,Attempt:0,}" Jan 29 16:18:52.413758 kubelet[2358]: E0129 16:18:52.413640 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-a-7095f58259.novalocal?timeout=10s\": dial tcp 172.24.4.227:6443: connect: connection refused" interval="800ms" Jan 29 16:18:52.517809 kubelet[2358]: I0129 16:18:52.517673 2358 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:52.518863 kubelet[2358]: E0129 16:18:52.518763 2358 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.227:6443/api/v1/nodes\": dial tcp 172.24.4.227:6443: connect: connection refused" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:52.631939 kubelet[2358]: W0129 16:18:52.631782 2358 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.227:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:52.631939 kubelet[2358]: E0129 16:18:52.631915 2358 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.227:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:52.974831 kubelet[2358]: W0129 16:18:52.974686 2358 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-a-7095f58259.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:52.974831 kubelet[2358]: E0129 16:18:52.974794 2358 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-a-7095f58259.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:53.007908 kubelet[2358]: W0129 16:18:53.007806 2358 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:53.007908 kubelet[2358]: E0129 16:18:53.007905 2358 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:53.214862 kubelet[2358]: E0129 16:18:53.214778 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-a-7095f58259.novalocal?timeout=10s\": dial tcp 172.24.4.227:6443: connect: connection refused" interval="1.6s" Jan 29 16:18:53.241543 kubelet[2358]: W0129 16:18:53.241293 2358 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:53.241543 kubelet[2358]: E0129 16:18:53.241410 2358 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:53.284674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922436270.mount: Deactivated successfully. 
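Note the retry interval on the "Failed to ensure lease exists" errors: 200ms at 16:18:51, then 400ms, 800ms, and now 1.6s, doubling each attempt while the API server keeps refusing connections. A sketch of that pattern; the factor of 2 is read directly off the log, while the cap below is an assumption for illustration only, not taken from kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    // Doubling retry backoff as observed in the lease-controller errors:
    // 200ms -> 400ms -> 800ms -> 1.6s.
    func main() {
        interval := 200 * time.Millisecond
        maxInterval := 7 * time.Second // assumed cap, for illustration only
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("attempt %d failed; retrying in %v\n", attempt, interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }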
Jan 29 16:18:53.298605 containerd[1485]: time="2025-01-29T16:18:53.298478189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:18:53.302513 containerd[1485]: time="2025-01-29T16:18:53.302392842Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 16:18:53.310390 containerd[1485]: time="2025-01-29T16:18:53.308801193Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:18:53.311286 containerd[1485]: time="2025-01-29T16:18:53.311233407Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:18:53.313553 containerd[1485]: time="2025-01-29T16:18:53.313454696Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:18:53.315629 containerd[1485]: time="2025-01-29T16:18:53.315547992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:18:53.315761 containerd[1485]: time="2025-01-29T16:18:53.315719405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:18:53.322943 kubelet[2358]: I0129 16:18:53.322885 2358 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:53.325237 kubelet[2358]: E0129 16:18:53.325178 2358 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.227:6443/api/v1/nodes\": dial tcp 172.24.4.227:6443: connect: connection refused" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:53.328674 containerd[1485]: time="2025-01-29T16:18:53.328612405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:18:53.333979 containerd[1485]: time="2025-01-29T16:18:53.332940478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.066053571s" Jan 29 16:18:53.338841 containerd[1485]: time="2025-01-29T16:18:53.338765530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.064880619s" Jan 29 16:18:53.340147 containerd[1485]: time="2025-01-29T16:18:53.339780779Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.080752926s" Jan 29 16:18:53.518193 containerd[1485]: time="2025-01-29T16:18:53.517924721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:18:53.518979 containerd[1485]: time="2025-01-29T16:18:53.518092456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:18:53.518979 containerd[1485]: time="2025-01-29T16:18:53.518151427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:53.520201 containerd[1485]: time="2025-01-29T16:18:53.520061155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:53.527798 containerd[1485]: time="2025-01-29T16:18:53.527597017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:18:53.528640 containerd[1485]: time="2025-01-29T16:18:53.528357832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:18:53.528640 containerd[1485]: time="2025-01-29T16:18:53.528376219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:53.528640 containerd[1485]: time="2025-01-29T16:18:53.528457717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:53.539275 containerd[1485]: time="2025-01-29T16:18:53.539009683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:18:53.539275 containerd[1485]: time="2025-01-29T16:18:53.539183280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:18:53.539684 containerd[1485]: time="2025-01-29T16:18:53.539232712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:53.541522 containerd[1485]: time="2025-01-29T16:18:53.540904320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:18:53.563526 systemd[1]: Started cri-containerd-1510cb45d2efd99200aa6321fb7e1c0008a862d8eae6bdb3305683049cea9c53.scope - libcontainer container 1510cb45d2efd99200aa6321fb7e1c0008a862d8eae6bdb3305683049cea9c53. Jan 29 16:18:53.568893 systemd[1]: Started cri-containerd-50952a8cf5478d6299597f765493e22f09e38aa8706f1913366309e2fe229120.scope - libcontainer container 50952a8cf5478d6299597f765493e22f09e38aa8706f1913366309e2fe229120. Jan 29 16:18:53.587463 systemd[1]: Started cri-containerd-9047f645b98882c568b48b2b1a27f9c4945772aac01ec9090b4cfe1889879994.scope - libcontainer container 9047f645b98882c568b48b2b1a27f9c4945772aac01ec9090b4cfe1889879994. 
Jan 29 16:18:53.634248 containerd[1485]: time="2025-01-29T16:18:53.634106147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal,Uid:5deb6e678b11ef85d8aefc8dc6bfc807,Namespace:kube-system,Attempt:0,} returns sandbox id \"50952a8cf5478d6299597f765493e22f09e38aa8706f1913366309e2fe229120\"" Jan 29 16:18:53.641780 containerd[1485]: time="2025-01-29T16:18:53.641571793Z" level=info msg="CreateContainer within sandbox \"50952a8cf5478d6299597f765493e22f09e38aa8706f1913366309e2fe229120\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:18:53.660646 containerd[1485]: time="2025-01-29T16:18:53.660581396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal,Uid:0810c2d46e34d972fdd471f5b873218a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9047f645b98882c568b48b2b1a27f9c4945772aac01ec9090b4cfe1889879994\"" Jan 29 16:18:53.666192 containerd[1485]: time="2025-01-29T16:18:53.666036337Z" level=info msg="CreateContainer within sandbox \"9047f645b98882c568b48b2b1a27f9c4945772aac01ec9090b4cfe1889879994\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:18:53.666807 containerd[1485]: time="2025-01-29T16:18:53.666685562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-a-7095f58259.novalocal,Uid:2b6428bcd0069436bdd358f2a3c6d3cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"1510cb45d2efd99200aa6321fb7e1c0008a862d8eae6bdb3305683049cea9c53\"" Jan 29 16:18:53.670939 containerd[1485]: time="2025-01-29T16:18:53.670854027Z" level=info msg="CreateContainer within sandbox \"1510cb45d2efd99200aa6321fb7e1c0008a862d8eae6bdb3305683049cea9c53\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:18:53.678311 containerd[1485]: time="2025-01-29T16:18:53.678273719Z" level=info msg="CreateContainer within sandbox \"50952a8cf5478d6299597f765493e22f09e38aa8706f1913366309e2fe229120\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7738cc6fd4257eb266467b743bb7881e04b6919fafdcc1612ee6d32a42c34aff\"" Jan 29 16:18:53.679488 containerd[1485]: time="2025-01-29T16:18:53.679361637Z" level=info msg="StartContainer for \"7738cc6fd4257eb266467b743bb7881e04b6919fafdcc1612ee6d32a42c34aff\"" Jan 29 16:18:53.699550 containerd[1485]: time="2025-01-29T16:18:53.699389795Z" level=info msg="CreateContainer within sandbox \"9047f645b98882c568b48b2b1a27f9c4945772aac01ec9090b4cfe1889879994\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ce36ae4166613ba06f6d807727728bd385b673d51e7af85578c7fdff1d31c6a7\"" Jan 29 16:18:53.700624 containerd[1485]: time="2025-01-29T16:18:53.700509457Z" level=info msg="StartContainer for \"ce36ae4166613ba06f6d807727728bd385b673d51e7af85578c7fdff1d31c6a7\"" Jan 29 16:18:53.710018 containerd[1485]: time="2025-01-29T16:18:53.709842777Z" level=info msg="CreateContainer within sandbox \"1510cb45d2efd99200aa6321fb7e1c0008a862d8eae6bdb3305683049cea9c53\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"427d68847d67a9aa73ba2ab9bcea6631ea3e23064682520d6f6654162ff2081b\"" Jan 29 16:18:53.710484 systemd[1]: Started cri-containerd-7738cc6fd4257eb266467b743bb7881e04b6919fafdcc1612ee6d32a42c34aff.scope - libcontainer container 7738cc6fd4257eb266467b743bb7881e04b6919fafdcc1612ee6d32a42c34aff. 
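Each of these containers appears to systemd as a transient scope unit named after its CRI container id; the "Started cri-containerd-….scope" lines above are that unit being created. The naming pattern below is read directly off the log; containerd's internals are not shown:

    package main

    import "fmt"

    // containerd's systemd cgroup integration wraps each container in a
    // transient unit: cri-containerd-<container-id>.scope.
    func scopeUnit(containerID string) string {
        return "cri-containerd-" + containerID + ".scope"
    }

    func main() {
        // kube-apiserver container id from the log above.
        fmt.Println(scopeUnit("7738cc6fd4257eb266467b743bb7881e04b6919fafdcc1612ee6d32a42c34aff"))
    }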
Jan 29 16:18:53.712202 containerd[1485]: time="2025-01-29T16:18:53.711933427Z" level=info msg="StartContainer for \"427d68847d67a9aa73ba2ab9bcea6631ea3e23064682520d6f6654162ff2081b\"" Jan 29 16:18:53.744513 systemd[1]: Started cri-containerd-ce36ae4166613ba06f6d807727728bd385b673d51e7af85578c7fdff1d31c6a7.scope - libcontainer container ce36ae4166613ba06f6d807727728bd385b673d51e7af85578c7fdff1d31c6a7. Jan 29 16:18:53.764550 systemd[1]: Started cri-containerd-427d68847d67a9aa73ba2ab9bcea6631ea3e23064682520d6f6654162ff2081b.scope - libcontainer container 427d68847d67a9aa73ba2ab9bcea6631ea3e23064682520d6f6654162ff2081b. Jan 29 16:18:53.780111 kubelet[2358]: E0129 16:18:53.778609 2358 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.227:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.227:6443: connect: connection refused Jan 29 16:18:53.789571 containerd[1485]: time="2025-01-29T16:18:53.789524522Z" level=info msg="StartContainer for \"7738cc6fd4257eb266467b743bb7881e04b6919fafdcc1612ee6d32a42c34aff\" returns successfully" Jan 29 16:18:53.837549 containerd[1485]: time="2025-01-29T16:18:53.836492114Z" level=info msg="StartContainer for \"ce36ae4166613ba06f6d807727728bd385b673d51e7af85578c7fdff1d31c6a7\" returns successfully" Jan 29 16:18:53.868655 containerd[1485]: time="2025-01-29T16:18:53.867556534Z" level=info msg="StartContainer for \"427d68847d67a9aa73ba2ab9bcea6631ea3e23064682520d6f6654162ff2081b\" returns successfully" Jan 29 16:18:54.928423 kubelet[2358]: I0129 16:18:54.927806 2358 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:55.766168 kubelet[2358]: I0129 16:18:55.766110 2358 apiserver.go:52] "Watching apiserver" Jan 29 16:18:55.811941 kubelet[2358]: I0129 16:18:55.811857 2358 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:18:55.820266 kubelet[2358]: E0129 16:18:55.820208 2358 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-0-0-a-7095f58259.novalocal\" not found" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:55.866623 kubelet[2358]: I0129 16:18:55.866436 2358 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:57.383659 kubelet[2358]: W0129 16:18:57.383599 2358 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:18:58.030145 systemd[1]: Reload requested from client PID 2636 ('systemctl') (unit session-9.scope)... Jan 29 16:18:58.030449 systemd[1]: Reloading... Jan 29 16:18:58.139447 zram_generator::config[2682]: No configuration found. Jan 29 16:18:58.297021 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:18:58.444771 systemd[1]: Reloading finished in 413 ms. Jan 29 16:18:58.479295 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 16:18:58.480712 kubelet[2358]: I0129 16:18:58.480654 2358 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:18:58.488266 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:18:58.488543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:18:58.488601 systemd[1]: kubelet.service: Consumed 990ms CPU time, 115.2M memory peak. Jan 29 16:18:58.493772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:18:58.645042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:18:58.650479 (kubelet)[2746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:18:58.789954 kubelet[2746]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:18:58.789954 kubelet[2746]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:18:58.789954 kubelet[2746]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:18:58.790373 kubelet[2746]: I0129 16:18:58.790068 2746 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:18:58.797304 kubelet[2746]: I0129 16:18:58.797257 2746 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 16:18:58.797304 kubelet[2746]: I0129 16:18:58.797303 2746 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:18:58.797779 kubelet[2746]: I0129 16:18:58.797739 2746 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 16:18:58.800976 kubelet[2746]: I0129 16:18:58.800940 2746 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:18:58.806606 kubelet[2746]: I0129 16:18:58.804825 2746 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:18:58.813508 kubelet[2746]: I0129 16:18:58.813486 2746 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:18:58.813877 kubelet[2746]: I0129 16:18:58.813853 2746 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:18:58.814115 kubelet[2746]: I0129 16:18:58.813934 2746 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-a-7095f58259.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 16:18:58.814263 kubelet[2746]: I0129 16:18:58.814246 2746 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:18:58.814345 kubelet[2746]: I0129 16:18:58.814314 2746 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 16:18:58.814487 kubelet[2746]: I0129 16:18:58.814458 2746 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:18:58.814639 kubelet[2746]: I0129 16:18:58.814629 2746 kubelet.go:400] "Attempting to sync node with API server" Jan 29 16:18:58.817127 kubelet[2746]: I0129 16:18:58.815489 2746 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:18:58.817127 kubelet[2746]: I0129 16:18:58.815573 2746 kubelet.go:312] "Adding apiserver pod source" Jan 29 16:18:58.817127 kubelet[2746]: I0129 16:18:58.815630 2746 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:18:58.825476 kubelet[2746]: I0129 16:18:58.825457 2746 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:18:58.827386 kubelet[2746]: I0129 16:18:58.826512 2746 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:18:58.827869 kubelet[2746]: I0129 16:18:58.827856 2746 server.go:1264] "Started kubelet" Jan 29 16:18:58.832268 kubelet[2746]: I0129 16:18:58.830769 2746 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:18:58.839736 kubelet[2746]: I0129 16:18:58.839697 2746 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:18:58.840994 kubelet[2746]: I0129 16:18:58.840982 2746 server.go:455] 
"Adding debug handlers to kubelet server" Jan 29 16:18:58.841999 kubelet[2746]: I0129 16:18:58.841958 2746 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:18:58.846285 kubelet[2746]: I0129 16:18:58.843143 2746 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 16:18:58.846285 kubelet[2746]: I0129 16:18:58.844267 2746 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:18:58.848824 kubelet[2746]: I0129 16:18:58.843142 2746 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:18:58.848824 kubelet[2746]: I0129 16:18:58.847606 2746 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:18:58.851035 kubelet[2746]: I0129 16:18:58.850996 2746 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:18:58.852028 kubelet[2746]: I0129 16:18:58.851990 2746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:18:58.852874 kubelet[2746]: E0129 16:18:58.852709 2746 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:18:58.857132 kubelet[2746]: I0129 16:18:58.857094 2746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:18:58.857132 kubelet[2746]: I0129 16:18:58.857136 2746 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:18:58.857246 kubelet[2746]: I0129 16:18:58.857150 2746 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 16:18:58.857246 kubelet[2746]: E0129 16:18:58.857192 2746 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:18:58.868841 kubelet[2746]: I0129 16:18:58.868445 2746 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:18:58.868841 kubelet[2746]: I0129 16:18:58.868467 2746 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:18:58.919470 kubelet[2746]: I0129 16:18:58.919035 2746 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:18:58.919470 kubelet[2746]: I0129 16:18:58.919053 2746 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:18:58.919470 kubelet[2746]: I0129 16:18:58.919070 2746 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:18:58.919470 kubelet[2746]: I0129 16:18:58.919235 2746 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:18:58.919470 kubelet[2746]: I0129 16:18:58.919246 2746 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:18:58.919470 kubelet[2746]: I0129 16:18:58.919263 2746 policy_none.go:49] "None policy: Start" Jan 29 16:18:58.920786 kubelet[2746]: I0129 16:18:58.920210 2746 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:18:58.920786 kubelet[2746]: I0129 16:18:58.920231 2746 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:18:58.920786 kubelet[2746]: I0129 16:18:58.920382 2746 state_mem.go:75] "Updated machine memory state" Jan 29 16:18:58.924680 kubelet[2746]: I0129 16:18:58.924663 2746 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 
29 16:18:58.925218 kubelet[2746]: I0129 16:18:58.925104 2746 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:18:58.925912 kubelet[2746]: I0129 16:18:58.925890 2746 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:18:58.947359 kubelet[2746]: I0129 16:18:58.947311 2746 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:58.957611 kubelet[2746]: I0129 16:18:58.957490 2746 topology_manager.go:215] "Topology Admit Handler" podUID="5deb6e678b11ef85d8aefc8dc6bfc807" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:58.957696 kubelet[2746]: I0129 16:18:58.957623 2746 topology_manager.go:215] "Topology Admit Handler" podUID="0810c2d46e34d972fdd471f5b873218a" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:58.957696 kubelet[2746]: I0129 16:18:58.957678 2746 topology_manager.go:215] "Topology Admit Handler" podUID="2b6428bcd0069436bdd358f2a3c6d3cf" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:58.958592 kubelet[2746]: I0129 16:18:58.958455 2746 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:58.958592 kubelet[2746]: I0129 16:18:58.958507 2746 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:58.964641 kubelet[2746]: W0129 16:18:58.964567 2746 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:18:58.969556 kubelet[2746]: W0129 16:18:58.969420 2746 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:18:58.970948 kubelet[2746]: W0129 16:18:58.970926 2746 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:18:58.971007 kubelet[2746]: E0129 16:18:58.970985 2746 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:59.023517 sudo[2777]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 16:18:59.024025 sudo[2777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 16:18:59.048955 kubelet[2746]: I0129 16:18:59.048616 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0810c2d46e34d972fdd471f5b873218a-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"0810c2d46e34d972fdd471f5b873218a\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:59.048955 kubelet[2746]: I0129 16:18:59.048660 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0810c2d46e34d972fdd471f5b873218a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"0810c2d46e34d972fdd471f5b873218a\") " 
pod="kube-system/kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:59.048955 kubelet[2746]: I0129 16:18:59.048698 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0810c2d46e34d972fdd471f5b873218a-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"0810c2d46e34d972fdd471f5b873218a\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:59.048955 kubelet[2746]: I0129 16:18:59.048719 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b6428bcd0069436bdd358f2a3c6d3cf-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"2b6428bcd0069436bdd358f2a3c6d3cf\") " pod="kube-system/kube-scheduler-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:59.048955 kubelet[2746]: I0129 16:18:59.048740 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5deb6e678b11ef85d8aefc8dc6bfc807-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"5deb6e678b11ef85d8aefc8dc6bfc807\") " pod="kube-system/kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:59.049185 kubelet[2746]: I0129 16:18:59.048758 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5deb6e678b11ef85d8aefc8dc6bfc807-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"5deb6e678b11ef85d8aefc8dc6bfc807\") " pod="kube-system/kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:59.049185 kubelet[2746]: I0129 16:18:59.048778 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0810c2d46e34d972fdd471f5b873218a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"0810c2d46e34d972fdd471f5b873218a\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:59.049185 kubelet[2746]: I0129 16:18:59.048799 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5deb6e678b11ef85d8aefc8dc6bfc807-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"5deb6e678b11ef85d8aefc8dc6bfc807\") " pod="kube-system/kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:59.049185 kubelet[2746]: I0129 16:18:59.048822 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0810c2d46e34d972fdd471f5b873218a-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal\" (UID: \"0810c2d46e34d972fdd471f5b873218a\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:59.567198 sudo[2777]: pam_unix(sudo:session): session closed for user root Jan 29 16:18:59.818818 kubelet[2746]: I0129 16:18:59.818726 2746 apiserver.go:52] "Watching apiserver" Jan 29 16:18:59.845357 kubelet[2746]: I0129 16:18:59.844954 2746 desired_state_of_world_populator.go:157] "Finished populating initial desired state 
of world" Jan 29 16:18:59.917270 kubelet[2746]: W0129 16:18:59.913776 2746 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:18:59.917270 kubelet[2746]: E0129 16:18:59.913850 2746 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal" Jan 29 16:18:59.954263 kubelet[2746]: I0129 16:18:59.954014 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-0-a-7095f58259.novalocal" podStartSLOduration=1.953997609 podStartE2EDuration="1.953997609s" podCreationTimestamp="2025-01-29 16:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:18:59.95351746 +0000 UTC m=+1.297917824" watchObservedRunningTime="2025-01-29 16:18:59.953997609 +0000 UTC m=+1.298397973" Jan 29 16:18:59.954263 kubelet[2746]: I0129 16:18:59.954110 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-0-a-7095f58259.novalocal" podStartSLOduration=1.954105706 podStartE2EDuration="1.954105706s" podCreationTimestamp="2025-01-29 16:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:18:59.942812919 +0000 UTC m=+1.287213283" watchObservedRunningTime="2025-01-29 16:18:59.954105706 +0000 UTC m=+1.298506070" Jan 29 16:19:01.617920 sudo[1717]: pam_unix(sudo:session): session closed for user root Jan 29 16:19:01.903210 sshd[1716]: Connection closed by 172.24.4.1 port 59194 Jan 29 16:19:01.905584 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jan 29 16:19:01.914809 systemd[1]: sshd@6-172.24.4.227:22-172.24.4.1:59194.service: Deactivated successfully. Jan 29 16:19:01.919972 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:19:01.921803 systemd[1]: session-9.scope: Consumed 8.398s CPU time, 295.2M memory peak. Jan 29 16:19:01.928468 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:19:01.931090 systemd-logind[1468]: Removed session 9. Jan 29 16:19:08.057286 kubelet[2746]: I0129 16:19:08.057168 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-0-a-7095f58259.novalocal" podStartSLOduration=11.057131675 podStartE2EDuration="11.057131675s" podCreationTimestamp="2025-01-29 16:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:18:59.966280894 +0000 UTC m=+1.310681258" watchObservedRunningTime="2025-01-29 16:19:08.057131675 +0000 UTC m=+9.401532089" Jan 29 16:19:14.433912 kubelet[2746]: I0129 16:19:14.433653 2746 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:19:14.437141 containerd[1485]: time="2025-01-29T16:19:14.434827707Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 16:19:14.437705 kubelet[2746]: I0129 16:19:14.435116 2746 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:19:15.312222 kubelet[2746]: I0129 16:19:15.310699 2746 topology_manager.go:215] "Topology Admit Handler" podUID="1e3144be-f1fa-44f5-a8d4-4487b71dadce" podNamespace="kube-system" podName="kube-proxy-lm269" Jan 29 16:19:15.314886 kubelet[2746]: I0129 16:19:15.314825 2746 topology_manager.go:215] "Topology Admit Handler" podUID="bd1631d5-6ffe-4c15-a229-d8bae18e25e8" podNamespace="kube-system" podName="cilium-24qp7" Jan 29 16:19:15.324676 kubelet[2746]: W0129 16:19:15.324642 2746 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230-0-0-a-7095f58259.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-a-7095f58259.novalocal' and this object Jan 29 16:19:15.325176 kubelet[2746]: E0129 16:19:15.325150 2746 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230-0-0-a-7095f58259.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-a-7095f58259.novalocal' and this object Jan 29 16:19:15.325276 kubelet[2746]: W0129 16:19:15.325158 2746 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4230-0-0-a-7095f58259.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-a-7095f58259.novalocal' and this object Jan 29 16:19:15.325387 kubelet[2746]: E0129 16:19:15.325374 2746 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4230-0-0-a-7095f58259.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-a-7095f58259.novalocal' and this object Jan 29 16:19:15.325483 kubelet[2746]: W0129 16:19:15.325203 2746 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-0-0-a-7095f58259.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-a-7095f58259.novalocal' and this object Jan 29 16:19:15.325591 kubelet[2746]: E0129 16:19:15.325579 2746 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-0-0-a-7095f58259.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-a-7095f58259.novalocal' and this object Jan 29 16:19:15.325778 kubelet[2746]: W0129 16:19:15.325091 2746 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-0-0-a-7095f58259.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 
Jan 29 16:19:15.325906 kubelet[2746]: E0129 16:19:15.325894 2746 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-0-0-a-7095f58259.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-a-7095f58259.novalocal' and this object
Jan 29 16:19:15.334686 systemd[1]: Created slice kubepods-besteffort-pod1e3144be_f1fa_44f5_a8d4_4487b71dadce.slice - libcontainer container kubepods-besteffort-pod1e3144be_f1fa_44f5_a8d4_4487b71dadce.slice.
Jan 29 16:19:15.344233 systemd[1]: Created slice kubepods-burstable-podbd1631d5_6ffe_4c15_a229_d8bae18e25e8.slice - libcontainer container kubepods-burstable-podbd1631d5_6ffe_4c15_a229_d8bae18e25e8.slice.
Jan 29 16:19:15.363756 kubelet[2746]: I0129 16:19:15.363627 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e3144be-f1fa-44f5-a8d4-4487b71dadce-lib-modules\") pod \"kube-proxy-lm269\" (UID: \"1e3144be-f1fa-44f5-a8d4-4487b71dadce\") " pod="kube-system/kube-proxy-lm269"
Jan 29 16:19:15.363756 kubelet[2746]: I0129 16:19:15.363686 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-etc-cni-netd\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7"
Jan 29 16:19:15.363756 kubelet[2746]: I0129 16:19:15.363708 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmt8d\" (UniqueName: \"kubernetes.io/projected/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-kube-api-access-bmt8d\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7"
Jan 29 16:19:15.364422 kubelet[2746]: I0129 16:19:15.363731 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e3144be-f1fa-44f5-a8d4-4487b71dadce-kube-proxy\") pod \"kube-proxy-lm269\" (UID: \"1e3144be-f1fa-44f5-a8d4-4487b71dadce\") " pod="kube-system/kube-proxy-lm269"
Jan 29 16:19:15.364422 kubelet[2746]: I0129 16:19:15.364286 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc94n\" (UniqueName: \"kubernetes.io/projected/1e3144be-f1fa-44f5-a8d4-4487b71dadce-kube-api-access-vc94n\") pod \"kube-proxy-lm269\" (UID: \"1e3144be-f1fa-44f5-a8d4-4487b71dadce\") " pod="kube-system/kube-proxy-lm269"
Jan 29 16:19:15.364422 kubelet[2746]: I0129 16:19:15.364312 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cilium-run\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7"
Jan 29 16:19:15.364422 kubelet[2746]: I0129 16:19:15.364380 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cilium-cgroup\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7"
pod="kube-system/cilium-24qp7" Jan 29 16:19:15.364422 kubelet[2746]: I0129 16:19:15.364398 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-xtables-lock\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7" Jan 29 16:19:15.365341 kubelet[2746]: I0129 16:19:15.364769 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-host-proc-sys-net\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7" Jan 29 16:19:15.365341 kubelet[2746]: I0129 16:19:15.364799 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-host-proc-sys-kernel\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7" Jan 29 16:19:15.365341 kubelet[2746]: I0129 16:19:15.365229 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e3144be-f1fa-44f5-a8d4-4487b71dadce-xtables-lock\") pod \"kube-proxy-lm269\" (UID: \"1e3144be-f1fa-44f5-a8d4-4487b71dadce\") " pod="kube-system/kube-proxy-lm269" Jan 29 16:19:15.365341 kubelet[2746]: I0129 16:19:15.365253 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-lib-modules\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7" Jan 29 16:19:15.365341 kubelet[2746]: I0129 16:19:15.365305 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-clustermesh-secrets\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7" Jan 29 16:19:15.365695 kubelet[2746]: I0129 16:19:15.365548 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-bpf-maps\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7" Jan 29 16:19:15.365695 kubelet[2746]: I0129 16:19:15.365572 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-hostproc\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7" Jan 29 16:19:15.365695 kubelet[2746]: I0129 16:19:15.365623 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cni-path\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7" Jan 29 16:19:15.365695 kubelet[2746]: I0129 16:19:15.365641 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cilium-config-path\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7" Jan 29 16:19:15.365962 kubelet[2746]: I0129 16:19:15.365796 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-hubble-tls\") pod \"cilium-24qp7\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") " pod="kube-system/cilium-24qp7" Jan 29 16:19:15.509286 kubelet[2746]: I0129 16:19:15.508806 2746 topology_manager.go:215] "Topology Admit Handler" podUID="e8956fda-f110-4260-aa68-ab3633f0b34b" podNamespace="kube-system" podName="cilium-operator-599987898-zwv62" Jan 29 16:19:15.517298 systemd[1]: Created slice kubepods-besteffort-pode8956fda_f110_4260_aa68_ab3633f0b34b.slice - libcontainer container kubepods-besteffort-pode8956fda_f110_4260_aa68_ab3633f0b34b.slice. Jan 29 16:19:15.567457 kubelet[2746]: I0129 16:19:15.567252 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw7rg\" (UniqueName: \"kubernetes.io/projected/e8956fda-f110-4260-aa68-ab3633f0b34b-kube-api-access-cw7rg\") pod \"cilium-operator-599987898-zwv62\" (UID: \"e8956fda-f110-4260-aa68-ab3633f0b34b\") " pod="kube-system/cilium-operator-599987898-zwv62" Jan 29 16:19:15.567457 kubelet[2746]: I0129 16:19:15.567405 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8956fda-f110-4260-aa68-ab3633f0b34b-cilium-config-path\") pod \"cilium-operator-599987898-zwv62\" (UID: \"e8956fda-f110-4260-aa68-ab3633f0b34b\") " pod="kube-system/cilium-operator-599987898-zwv62" Jan 29 16:19:16.468731 kubelet[2746]: E0129 16:19:16.468658 2746 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 29 16:19:16.468731 kubelet[2746]: E0129 16:19:16.468710 2746 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-24qp7: failed to sync secret cache: timed out waiting for the condition Jan 29 16:19:16.469038 kubelet[2746]: E0129 16:19:16.468843 2746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-hubble-tls podName:bd1631d5-6ffe-4c15-a229-d8bae18e25e8 nodeName:}" failed. No retries permitted until 2025-01-29 16:19:16.968802514 +0000 UTC m=+18.313202938 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-hubble-tls") pod "cilium-24qp7" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8") : failed to sync secret cache: timed out waiting for the condition Jan 29 16:19:16.469314 kubelet[2746]: E0129 16:19:16.469276 2746 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 29 16:19:16.469434 kubelet[2746]: E0129 16:19:16.469408 2746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e3144be-f1fa-44f5-a8d4-4487b71dadce-kube-proxy podName:1e3144be-f1fa-44f5-a8d4-4487b71dadce nodeName:}" failed. No retries permitted until 2025-01-29 16:19:16.969381638 +0000 UTC m=+18.313782062 (durationBeforeRetry 500ms). 
Jan 29 16:19:16.488740 kubelet[2746]: E0129 16:19:16.488657 2746 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:19:16.488740 kubelet[2746]: E0129 16:19:16.488717 2746 projected.go:200] Error preparing data for projected volume kube-api-access-vc94n for pod kube-system/kube-proxy-lm269: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:19:16.489072 kubelet[2746]: E0129 16:19:16.488806 2746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e3144be-f1fa-44f5-a8d4-4487b71dadce-kube-api-access-vc94n podName:1e3144be-f1fa-44f5-a8d4-4487b71dadce nodeName:}" failed. No retries permitted until 2025-01-29 16:19:16.98877426 +0000 UTC m=+18.333174684 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vc94n" (UniqueName: "kubernetes.io/projected/1e3144be-f1fa-44f5-a8d4-4487b71dadce-kube-api-access-vc94n") pod "kube-proxy-lm269" (UID: "1e3144be-f1fa-44f5-a8d4-4487b71dadce") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:19:16.489072 kubelet[2746]: E0129 16:19:16.488849 2746 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:19:16.489072 kubelet[2746]: E0129 16:19:16.488870 2746 projected.go:200] Error preparing data for projected volume kube-api-access-bmt8d for pod kube-system/cilium-24qp7: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:19:16.489072 kubelet[2746]: E0129 16:19:16.488924 2746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-kube-api-access-bmt8d podName:bd1631d5-6ffe-4c15-a229-d8bae18e25e8 nodeName:}" failed. No retries permitted until 2025-01-29 16:19:16.988903593 +0000 UTC m=+18.333304017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bmt8d" (UniqueName: "kubernetes.io/projected/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-kube-api-access-bmt8d") pod "cilium-24qp7" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:19:16.721770 containerd[1485]: time="2025-01-29T16:19:16.721607142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zwv62,Uid:e8956fda-f110-4260-aa68-ab3633f0b34b,Namespace:kube-system,Attempt:0,}"
Jan 29 16:19:16.779151 containerd[1485]: time="2025-01-29T16:19:16.778893782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:19:16.779151 containerd[1485]: time="2025-01-29T16:19:16.778956725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:19:16.779151 containerd[1485]: time="2025-01-29T16:19:16.778976204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:19:16.779877 containerd[1485]: time="2025-01-29T16:19:16.779738116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:19:16.812537 systemd[1]: Started cri-containerd-aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4.scope - libcontainer container aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4.
Jan 29 16:19:16.866536 containerd[1485]: time="2025-01-29T16:19:16.866447883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zwv62,Uid:e8956fda-f110-4260-aa68-ab3633f0b34b,Namespace:kube-system,Attempt:0,} returns sandbox id \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\""
Jan 29 16:19:16.870878 containerd[1485]: time="2025-01-29T16:19:16.870799647Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 29 16:19:17.142604 containerd[1485]: time="2025-01-29T16:19:17.142406123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lm269,Uid:1e3144be-f1fa-44f5-a8d4-4487b71dadce,Namespace:kube-system,Attempt:0,}"
Jan 29 16:19:17.150221 containerd[1485]: time="2025-01-29T16:19:17.150146941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24qp7,Uid:bd1631d5-6ffe-4c15-a229-d8bae18e25e8,Namespace:kube-system,Attempt:0,}"
Jan 29 16:19:17.197276 containerd[1485]: time="2025-01-29T16:19:17.197024881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:19:17.197276 containerd[1485]: time="2025-01-29T16:19:17.197098825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:19:17.197276 containerd[1485]: time="2025-01-29T16:19:17.197119476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:19:17.197493 containerd[1485]: time="2025-01-29T16:19:17.197219221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:19:17.203292 containerd[1485]: time="2025-01-29T16:19:17.202960345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:19:17.204170 containerd[1485]: time="2025-01-29T16:19:17.203571100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:19:17.204170 containerd[1485]: time="2025-01-29T16:19:17.203685103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:19:17.204170 containerd[1485]: time="2025-01-29T16:19:17.203900144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:19:17.221539 systemd[1]: Started cri-containerd-fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a.scope - libcontainer container fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a.
Jan 29 16:19:17.243479 systemd[1]: Started cri-containerd-2da9354e2cd43a4c85d6b00f55ec9b02f0938efe3d8e6be2a1319636bdfb3901.scope - libcontainer container 2da9354e2cd43a4c85d6b00f55ec9b02f0938efe3d8e6be2a1319636bdfb3901.
Jan 29 16:19:17.279850 containerd[1485]: time="2025-01-29T16:19:17.279621334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24qp7,Uid:bd1631d5-6ffe-4c15-a229-d8bae18e25e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\""
Jan 29 16:19:17.288457 containerd[1485]: time="2025-01-29T16:19:17.288143079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lm269,Uid:1e3144be-f1fa-44f5-a8d4-4487b71dadce,Namespace:kube-system,Attempt:0,} returns sandbox id \"2da9354e2cd43a4c85d6b00f55ec9b02f0938efe3d8e6be2a1319636bdfb3901\""
Jan 29 16:19:17.292250 containerd[1485]: time="2025-01-29T16:19:17.292203924Z" level=info msg="CreateContainer within sandbox \"2da9354e2cd43a4c85d6b00f55ec9b02f0938efe3d8e6be2a1319636bdfb3901\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 16:19:17.319986 containerd[1485]: time="2025-01-29T16:19:17.319925961Z" level=info msg="CreateContainer within sandbox \"2da9354e2cd43a4c85d6b00f55ec9b02f0938efe3d8e6be2a1319636bdfb3901\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3d1895bc05f81e173e96e7e24dfb9bce9eb813f5012a9db6689a698c464dc2b7\""
Jan 29 16:19:17.321922 containerd[1485]: time="2025-01-29T16:19:17.320756816Z" level=info msg="StartContainer for \"3d1895bc05f81e173e96e7e24dfb9bce9eb813f5012a9db6689a698c464dc2b7\""
Jan 29 16:19:17.352744 systemd[1]: Started cri-containerd-3d1895bc05f81e173e96e7e24dfb9bce9eb813f5012a9db6689a698c464dc2b7.scope - libcontainer container 3d1895bc05f81e173e96e7e24dfb9bce9eb813f5012a9db6689a698c464dc2b7.
Jan 29 16:19:17.401871 containerd[1485]: time="2025-01-29T16:19:17.401019322Z" level=info msg="StartContainer for \"3d1895bc05f81e173e96e7e24dfb9bce9eb813f5012a9db6689a698c464dc2b7\" returns successfully"
Jan 29 16:19:18.884870 kubelet[2746]: I0129 16:19:18.883692 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lm269" podStartSLOduration=3.88365722 podStartE2EDuration="3.88365722s" podCreationTimestamp="2025-01-29 16:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:19:17.954245742 +0000 UTC m=+19.298646117" watchObservedRunningTime="2025-01-29 16:19:18.88365722 +0000 UTC m=+20.228057624"
Jan 29 16:19:21.681542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947092116.mount: Deactivated successfully.
Jan 29 16:19:22.867378 containerd[1485]: time="2025-01-29T16:19:22.867314690Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:19:22.868799 containerd[1485]: time="2025-01-29T16:19:22.868764035Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 29 16:19:22.870221 containerd[1485]: time="2025-01-29T16:19:22.870199883Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:19:22.872986 containerd[1485]: time="2025-01-29T16:19:22.872843637Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.001750646s"
Jan 29 16:19:22.872986 containerd[1485]: time="2025-01-29T16:19:22.872878665Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 29 16:19:22.875147 containerd[1485]: time="2025-01-29T16:19:22.875106147Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 16:19:22.876697 containerd[1485]: time="2025-01-29T16:19:22.876530494Z" level=info msg="CreateContainer within sandbox \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 29 16:19:22.909448 containerd[1485]: time="2025-01-29T16:19:22.909407081Z" level=info msg="CreateContainer within sandbox \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\""
Jan 29 16:19:22.910290 containerd[1485]: time="2025-01-29T16:19:22.910267808Z" level=info msg="StartContainer for \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\""
Jan 29 16:19:22.957215 systemd[1]: Started cri-containerd-ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263.scope - libcontainer container ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263.
Jan 29 16:19:22.992410 containerd[1485]: time="2025-01-29T16:19:22.992299725Z" level=info msg="StartContainer for \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\" returns successfully"
Jan 29 16:19:23.987647 kubelet[2746]: I0129 16:19:23.987469 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-zwv62" podStartSLOduration=2.982002018 podStartE2EDuration="8.987426987s" podCreationTimestamp="2025-01-29 16:19:15 +0000 UTC" firstStartedPulling="2025-01-29 16:19:16.868746656 +0000 UTC m=+18.213147071" lastFinishedPulling="2025-01-29 16:19:22.874171666 +0000 UTC m=+24.218572040" observedRunningTime="2025-01-29 16:19:23.98700244 +0000 UTC m=+25.331402824" watchObservedRunningTime="2025-01-29 16:19:23.987426987 +0000 UTC m=+25.331827371"
Jan 29 16:19:28.816412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746481175.mount: Deactivated successfully.
Jan 29 16:19:31.828168 containerd[1485]: time="2025-01-29T16:19:31.827958356Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:19:31.830593 containerd[1485]: time="2025-01-29T16:19:31.829977324Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 29 16:19:31.832060 containerd[1485]: time="2025-01-29T16:19:31.831997515Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:19:31.839032 containerd[1485]: time="2025-01-29T16:19:31.838771060Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.963600628s"
Jan 29 16:19:31.839032 containerd[1485]: time="2025-01-29T16:19:31.838849753Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 29 16:19:31.843646 containerd[1485]: time="2025-01-29T16:19:31.843427227Z" level=info msg="CreateContainer within sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:19:31.881172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692757987.mount: Deactivated successfully.
Jan 29 16:19:31.891054 containerd[1485]: time="2025-01-29T16:19:31.890997485Z" level=info msg="CreateContainer within sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b\""
Jan 29 16:19:31.893814 containerd[1485]: time="2025-01-29T16:19:31.892226651Z" level=info msg="StartContainer for \"5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b\""
Jan 29 16:19:31.949541 systemd[1]: Started cri-containerd-5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b.scope - libcontainer container 5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b.
Jan 29 16:19:31.998594 containerd[1485]: time="2025-01-29T16:19:31.998390402Z" level=info msg="StartContainer for \"5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b\" returns successfully"
Jan 29 16:19:32.012012 systemd[1]: cri-containerd-5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b.scope: Deactivated successfully.
Jan 29 16:19:32.870174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b-rootfs.mount: Deactivated successfully.
Jan 29 16:19:33.234124 containerd[1485]: time="2025-01-29T16:19:33.233969973Z" level=info msg="shim disconnected" id=5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b namespace=k8s.io
Jan 29 16:19:33.235733 containerd[1485]: time="2025-01-29T16:19:33.234061510Z" level=warning msg="cleaning up after shim disconnected" id=5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b namespace=k8s.io
Jan 29 16:19:33.235733 containerd[1485]: time="2025-01-29T16:19:33.234472257Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:19:33.268383 containerd[1485]: time="2025-01-29T16:19:33.267306824Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:19:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 16:19:34.008855 containerd[1485]: time="2025-01-29T16:19:34.008582832Z" level=info msg="CreateContainer within sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:19:34.037528 containerd[1485]: time="2025-01-29T16:19:34.037391035Z" level=info msg="CreateContainer within sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c\""
Jan 29 16:19:34.038565 containerd[1485]: time="2025-01-29T16:19:34.038448014Z" level=info msg="StartContainer for \"4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c\""
Jan 29 16:19:34.098583 systemd[1]: Started cri-containerd-4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c.scope - libcontainer container 4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c.
Jan 29 16:19:34.140489 containerd[1485]: time="2025-01-29T16:19:34.140021601Z" level=info msg="StartContainer for \"4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c\" returns successfully"
Jan 29 16:19:34.151651 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:19:34.151977 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:19:34.152554 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:19:34.161849 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:19:34.167564 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:19:34.168227 systemd[1]: cri-containerd-4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c.scope: Deactivated successfully.
Jan 29 16:19:34.180458 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:19:34.204544 containerd[1485]: time="2025-01-29T16:19:34.204314640Z" level=info msg="shim disconnected" id=4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c namespace=k8s.io
Jan 29 16:19:34.204544 containerd[1485]: time="2025-01-29T16:19:34.204459851Z" level=warning msg="cleaning up after shim disconnected" id=4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c namespace=k8s.io
Jan 29 16:19:34.204544 containerd[1485]: time="2025-01-29T16:19:34.204470552Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:19:35.010229 containerd[1485]: time="2025-01-29T16:19:35.010132201Z" level=info msg="CreateContainer within sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:19:35.028425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c-rootfs.mount: Deactivated successfully.
Jan 29 16:19:35.063868 containerd[1485]: time="2025-01-29T16:19:35.063733631Z" level=info msg="CreateContainer within sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826\""
Jan 29 16:19:35.064576 containerd[1485]: time="2025-01-29T16:19:35.064536037Z" level=info msg="StartContainer for \"50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826\""
Jan 29 16:19:35.110771 systemd[1]: Started cri-containerd-50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826.scope - libcontainer container 50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826.
Jan 29 16:19:35.165555 systemd[1]: cri-containerd-50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826.scope: Deactivated successfully.
Jan 29 16:19:35.179166 containerd[1485]: time="2025-01-29T16:19:35.178957748Z" level=info msg="StartContainer for \"50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826\" returns successfully"
Jan 29 16:19:35.214490 containerd[1485]: time="2025-01-29T16:19:35.214220444Z" level=info msg="shim disconnected" id=50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826 namespace=k8s.io
Jan 29 16:19:35.214490 containerd[1485]: time="2025-01-29T16:19:35.214306300Z" level=warning msg="cleaning up after shim disconnected" id=50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826 namespace=k8s.io
Jan 29 16:19:35.214490 containerd[1485]: time="2025-01-29T16:19:35.214317482Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:19:36.017563 containerd[1485]: time="2025-01-29T16:19:36.017379555Z" level=info msg="CreateContainer within sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:19:36.027030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826-rootfs.mount: Deactivated successfully.
Jan 29 16:19:36.054785 containerd[1485]: time="2025-01-29T16:19:36.054624414Z" level=info msg="CreateContainer within sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e\""
Jan 29 16:19:36.057507 containerd[1485]: time="2025-01-29T16:19:36.057430049Z" level=info msg="StartContainer for \"368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e\""
Jan 29 16:19:36.115644 systemd[1]: Started cri-containerd-368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e.scope - libcontainer container 368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e.
Jan 29 16:19:36.145865 systemd[1]: cri-containerd-368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e.scope: Deactivated successfully.
Jan 29 16:19:36.158829 containerd[1485]: time="2025-01-29T16:19:36.158707842Z" level=info msg="StartContainer for \"368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e\" returns successfully"
Jan 29 16:19:36.200756 containerd[1485]: time="2025-01-29T16:19:36.200653199Z" level=info msg="shim disconnected" id=368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e namespace=k8s.io
Jan 29 16:19:36.201467 containerd[1485]: time="2025-01-29T16:19:36.201124421Z" level=warning msg="cleaning up after shim disconnected" id=368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e namespace=k8s.io
Jan 29 16:19:36.201467 containerd[1485]: time="2025-01-29T16:19:36.201157786Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:19:37.027904 containerd[1485]: time="2025-01-29T16:19:37.027821572Z" level=info msg="CreateContainer within sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:19:37.037757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e-rootfs.mount: Deactivated successfully.
Jan 29 16:19:37.080502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243338399.mount: Deactivated successfully.
Jan 29 16:19:37.101772 containerd[1485]: time="2025-01-29T16:19:37.101660721Z" level=info msg="CreateContainer within sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\""
Jan 29 16:19:37.103000 containerd[1485]: time="2025-01-29T16:19:37.102943897Z" level=info msg="StartContainer for \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\""
Jan 29 16:19:37.163157 systemd[1]: Started cri-containerd-d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f.scope - libcontainer container d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f.
Jan 29 16:19:37.206021 containerd[1485]: time="2025-01-29T16:19:37.205973282Z" level=info msg="StartContainer for \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\" returns successfully"
Jan 29 16:19:37.370476 kubelet[2746]: I0129 16:19:37.370092 2746 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 29 16:19:37.408552 kubelet[2746]: I0129 16:19:37.408495 2746 topology_manager.go:215] "Topology Admit Handler" podUID="45e58722-ec14-4076-b363-dd3b8caea2a3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-q2nn9"
Jan 29 16:19:37.412296 kubelet[2746]: I0129 16:19:37.411682 2746 topology_manager.go:215] "Topology Admit Handler" podUID="f0fd4f61-99ca-42d8-81ac-1fda46e04ba8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jdr8z"
Jan 29 16:19:37.424114 systemd[1]: Created slice kubepods-burstable-pod45e58722_ec14_4076_b363_dd3b8caea2a3.slice - libcontainer container kubepods-burstable-pod45e58722_ec14_4076_b363_dd3b8caea2a3.slice.
Jan 29 16:19:37.439583 systemd[1]: Created slice kubepods-burstable-podf0fd4f61_99ca_42d8_81ac_1fda46e04ba8.slice - libcontainer container kubepods-burstable-podf0fd4f61_99ca_42d8_81ac_1fda46e04ba8.slice.
Jan 29 16:19:37.443239 kubelet[2746]: I0129 16:19:37.442816 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45e58722-ec14-4076-b363-dd3b8caea2a3-config-volume\") pod \"coredns-7db6d8ff4d-q2nn9\" (UID: \"45e58722-ec14-4076-b363-dd3b8caea2a3\") " pod="kube-system/coredns-7db6d8ff4d-q2nn9"
Jan 29 16:19:37.443239 kubelet[2746]: I0129 16:19:37.442866 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhgxf\" (UniqueName: \"kubernetes.io/projected/45e58722-ec14-4076-b363-dd3b8caea2a3-kube-api-access-xhgxf\") pod \"coredns-7db6d8ff4d-q2nn9\" (UID: \"45e58722-ec14-4076-b363-dd3b8caea2a3\") " pod="kube-system/coredns-7db6d8ff4d-q2nn9"
Jan 29 16:19:37.543783 kubelet[2746]: I0129 16:19:37.543720 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dlmh\" (UniqueName: \"kubernetes.io/projected/f0fd4f61-99ca-42d8-81ac-1fda46e04ba8-kube-api-access-6dlmh\") pod \"coredns-7db6d8ff4d-jdr8z\" (UID: \"f0fd4f61-99ca-42d8-81ac-1fda46e04ba8\") " pod="kube-system/coredns-7db6d8ff4d-jdr8z"
Jan 29 16:19:37.543941 kubelet[2746]: I0129 16:19:37.543902 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0fd4f61-99ca-42d8-81ac-1fda46e04ba8-config-volume\") pod \"coredns-7db6d8ff4d-jdr8z\" (UID: \"f0fd4f61-99ca-42d8-81ac-1fda46e04ba8\") " pod="kube-system/coredns-7db6d8ff4d-jdr8z"
Jan 29 16:19:37.731484 containerd[1485]: time="2025-01-29T16:19:37.731131919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2nn9,Uid:45e58722-ec14-4076-b363-dd3b8caea2a3,Namespace:kube-system,Attempt:0,}"
Jan 29 16:19:37.745080 containerd[1485]: time="2025-01-29T16:19:37.744723593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jdr8z,Uid:f0fd4f61-99ca-42d8-81ac-1fda46e04ba8,Namespace:kube-system,Attempt:0,}"
Jan 29 16:19:38.058522 systemd[1]: run-containerd-runc-k8s.io-d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f-runc.bFHMTE.mount: Deactivated successfully.
Jan 29 16:19:39.463227 systemd-networkd[1384]: cilium_host: Link UP
Jan 29 16:19:39.463791 systemd-networkd[1384]: cilium_net: Link UP
Jan 29 16:19:39.465825 systemd-networkd[1384]: cilium_net: Gained carrier
Jan 29 16:19:39.466150 systemd-networkd[1384]: cilium_host: Gained carrier
Jan 29 16:19:39.559546 systemd-networkd[1384]: cilium_vxlan: Link UP
Jan 29 16:19:39.559555 systemd-networkd[1384]: cilium_vxlan: Gained carrier
Jan 29 16:19:39.781526 systemd-networkd[1384]: cilium_host: Gained IPv6LL
Jan 29 16:19:39.873698 kernel: NET: Registered PF_ALG protocol family
Jan 29 16:19:40.388598 systemd-networkd[1384]: cilium_net: Gained IPv6LL
Jan 29 16:19:40.703783 systemd-networkd[1384]: lxc_health: Link UP
Jan 29 16:19:40.717181 systemd-networkd[1384]: lxc_health: Gained carrier
Jan 29 16:19:40.900478 systemd-networkd[1384]: cilium_vxlan: Gained IPv6LL
Jan 29 16:19:41.171696 kubelet[2746]: I0129 16:19:41.170777 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-24qp7" podStartSLOduration=11.613562274 podStartE2EDuration="26.170759812s" podCreationTimestamp="2025-01-29 16:19:15 +0000 UTC" firstStartedPulling="2025-01-29 16:19:17.28275231 +0000 UTC m=+18.627152684" lastFinishedPulling="2025-01-29 16:19:31.839949807 +0000 UTC m=+33.184350222" observedRunningTime="2025-01-29 16:19:38.061678736 +0000 UTC m=+39.406079100" watchObservedRunningTime="2025-01-29 16:19:41.170759812 +0000 UTC m=+42.515160186"
Jan 29 16:19:41.341804 systemd-networkd[1384]: lxcc6b7625f1970: Link UP
Jan 29 16:19:41.356370 kernel: eth0: renamed from tmp2530f
Jan 29 16:19:41.365387 kernel: eth0: renamed from tmpb75c0
Jan 29 16:19:41.363764 systemd-networkd[1384]: lxcf94bf1fdb26c: Link UP
Jan 29 16:19:41.378648 systemd-networkd[1384]: lxcc6b7625f1970: Gained carrier
Jan 29 16:19:41.380038 systemd-networkd[1384]: lxcf94bf1fdb26c: Gained carrier
Jan 29 16:19:42.628586 systemd-networkd[1384]: lxc_health: Gained IPv6LL
Jan 29 16:19:43.204631 systemd-networkd[1384]: lxcf94bf1fdb26c: Gained IPv6LL
Jan 29 16:19:43.268688 systemd-networkd[1384]: lxcc6b7625f1970: Gained IPv6LL
Jan 29 16:19:45.960346 containerd[1485]: time="2025-01-29T16:19:45.958597945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:19:45.960346 containerd[1485]: time="2025-01-29T16:19:45.959134424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:19:45.960346 containerd[1485]: time="2025-01-29T16:19:45.959161304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:19:45.960346 containerd[1485]: time="2025-01-29T16:19:45.959256530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:19:45.995073 containerd[1485]: time="2025-01-29T16:19:45.993755063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:19:45.995073 containerd[1485]: time="2025-01-29T16:19:45.993820794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:19:45.995073 containerd[1485]: time="2025-01-29T16:19:45.993836212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:19:45.995073 containerd[1485]: time="2025-01-29T16:19:45.993909828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:19:46.001676 systemd[1]: run-containerd-runc-k8s.io-b75c0886cc2c3469d9cf6bd3befbdbb3b964d6310942db64b6967ac7ee714a5a-runc.Y65BnC.mount: Deactivated successfully.
Jan 29 16:19:46.016580 systemd[1]: Started cri-containerd-b75c0886cc2c3469d9cf6bd3befbdbb3b964d6310942db64b6967ac7ee714a5a.scope - libcontainer container b75c0886cc2c3469d9cf6bd3befbdbb3b964d6310942db64b6967ac7ee714a5a.
Jan 29 16:19:46.023449 systemd[1]: Started cri-containerd-2530fb9e18bd7d02533a6d02a4a2fa73f33c00b06e04427d9f270f6a3e6924e5.scope - libcontainer container 2530fb9e18bd7d02533a6d02a4a2fa73f33c00b06e04427d9f270f6a3e6924e5.
Jan 29 16:19:46.102190 containerd[1485]: time="2025-01-29T16:19:46.102133061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jdr8z,Uid:f0fd4f61-99ca-42d8-81ac-1fda46e04ba8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b75c0886cc2c3469d9cf6bd3befbdbb3b964d6310942db64b6967ac7ee714a5a\""
Jan 29 16:19:46.105974 containerd[1485]: time="2025-01-29T16:19:46.105152150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2nn9,Uid:45e58722-ec14-4076-b363-dd3b8caea2a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2530fb9e18bd7d02533a6d02a4a2fa73f33c00b06e04427d9f270f6a3e6924e5\""
Jan 29 16:19:46.109360 containerd[1485]: time="2025-01-29T16:19:46.109088072Z" level=info msg="CreateContainer within sandbox \"b75c0886cc2c3469d9cf6bd3befbdbb3b964d6310942db64b6967ac7ee714a5a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 16:19:46.111674 containerd[1485]: time="2025-01-29T16:19:46.111303425Z" level=info msg="CreateContainer within sandbox \"2530fb9e18bd7d02533a6d02a4a2fa73f33c00b06e04427d9f270f6a3e6924e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 16:19:46.143253 containerd[1485]: time="2025-01-29T16:19:46.143208945Z" level=info msg="CreateContainer within sandbox \"b75c0886cc2c3469d9cf6bd3befbdbb3b964d6310942db64b6967ac7ee714a5a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e605928762dfc9087baa83578b0fd1d03b9844a103975d370c7b6b63a7944d0\""
Jan 29 16:19:46.144364 containerd[1485]: time="2025-01-29T16:19:46.143999296Z" level=info msg="StartContainer for \"5e605928762dfc9087baa83578b0fd1d03b9844a103975d370c7b6b63a7944d0\""
Jan 29 16:19:46.151496 containerd[1485]: time="2025-01-29T16:19:46.151389400Z" level=info msg="CreateContainer within sandbox \"2530fb9e18bd7d02533a6d02a4a2fa73f33c00b06e04427d9f270f6a3e6924e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f962aee1517824b6a6cdaa4d1228166c19876270da5e2c902de71f11d60a78e5\""
Jan 29 16:19:46.158347 containerd[1485]: time="2025-01-29T16:19:46.157919746Z" level=info msg="StartContainer for \"f962aee1517824b6a6cdaa4d1228166c19876270da5e2c902de71f11d60a78e5\""
Jan 29 16:19:46.209506 systemd[1]: Started cri-containerd-5e605928762dfc9087baa83578b0fd1d03b9844a103975d370c7b6b63a7944d0.scope - libcontainer container 5e605928762dfc9087baa83578b0fd1d03b9844a103975d370c7b6b63a7944d0.
Jan 29 16:19:46.213775 systemd[1]: Started cri-containerd-f962aee1517824b6a6cdaa4d1228166c19876270da5e2c902de71f11d60a78e5.scope - libcontainer container f962aee1517824b6a6cdaa4d1228166c19876270da5e2c902de71f11d60a78e5.
Jan 29 16:19:46.254080 containerd[1485]: time="2025-01-29T16:19:46.254034283Z" level=info msg="StartContainer for \"5e605928762dfc9087baa83578b0fd1d03b9844a103975d370c7b6b63a7944d0\" returns successfully"
Jan 29 16:19:46.263770 containerd[1485]: time="2025-01-29T16:19:46.263593465Z" level=info msg="StartContainer for \"f962aee1517824b6a6cdaa4d1228166c19876270da5e2c902de71f11d60a78e5\" returns successfully"
Jan 29 16:19:47.099987 kubelet[2746]: I0129 16:19:47.099848 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jdr8z" podStartSLOduration=32.09981601 podStartE2EDuration="32.09981601s" podCreationTimestamp="2025-01-29 16:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:19:47.09397217 +0000 UTC m=+48.438372584" watchObservedRunningTime="2025-01-29 16:19:47.09981601 +0000 UTC m=+48.444216424"
Jan 29 16:19:47.132398 kubelet[2746]: I0129 16:19:47.131942 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-q2nn9" podStartSLOduration=32.131858204 podStartE2EDuration="32.131858204s" podCreationTimestamp="2025-01-29 16:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:19:47.131756936 +0000 UTC m=+48.476157350" watchObservedRunningTime="2025-01-29 16:19:47.131858204 +0000 UTC m=+48.476258628"
Jan 29 16:20:28.390538 systemd[1]: Started sshd@7-172.24.4.227:22-172.24.4.1:53162.service - OpenSSH per-connection server daemon (172.24.4.1:53162).
Jan 29 16:20:29.839070 sshd[4112]: Accepted publickey for core from 172.24.4.1 port 53162 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:20:29.842199 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:29.854456 systemd-logind[1468]: New session 10 of user core.
Jan 29 16:20:29.859676 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 16:20:30.647869 sshd[4114]: Connection closed by 172.24.4.1 port 53162
Jan 29 16:20:30.648452 sshd-session[4112]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:30.655967 systemd[1]: sshd@7-172.24.4.227:22-172.24.4.1:53162.service: Deactivated successfully.
Jan 29 16:20:30.660484 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 16:20:30.662624 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit.
Jan 29 16:20:30.665028 systemd-logind[1468]: Removed session 10.
Jan 29 16:20:35.678973 systemd[1]: Started sshd@8-172.24.4.227:22-172.24.4.1:37140.service - OpenSSH per-connection server daemon (172.24.4.1:37140).
Jan 29 16:20:36.995289 sshd[4126]: Accepted publickey for core from 172.24.4.1 port 37140 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:20:36.998438 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:37.010438 systemd-logind[1468]: New session 11 of user core.
Jan 29 16:20:37.016021 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 16:20:37.791448 sshd[4128]: Connection closed by 172.24.4.1 port 37140
Jan 29 16:20:37.792473 sshd-session[4126]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:37.799186 systemd[1]: sshd@8-172.24.4.227:22-172.24.4.1:37140.service: Deactivated successfully.
Jan 29 16:20:37.805899 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 16:20:37.810002 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit.
Jan 29 16:20:37.812883 systemd-logind[1468]: Removed session 11.
Jan 29 16:20:42.823034 systemd[1]: Started sshd@9-172.24.4.227:22-172.24.4.1:37148.service - OpenSSH per-connection server daemon (172.24.4.1:37148).
Jan 29 16:20:44.031403 sshd[4140]: Accepted publickey for core from 172.24.4.1 port 37148 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:20:44.035109 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:44.049451 systemd-logind[1468]: New session 12 of user core.
Jan 29 16:20:44.062720 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 16:20:44.867205 sshd[4142]: Connection closed by 172.24.4.1 port 37148
Jan 29 16:20:44.868195 sshd-session[4140]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:44.874716 systemd[1]: sshd@9-172.24.4.227:22-172.24.4.1:37148.service: Deactivated successfully.
Jan 29 16:20:44.879885 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 16:20:44.882473 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit.
Jan 29 16:20:44.885776 systemd-logind[1468]: Removed session 12.
Jan 29 16:20:49.898082 systemd[1]: Started sshd@10-172.24.4.227:22-172.24.4.1:58818.service - OpenSSH per-connection server daemon (172.24.4.1:58818).
Jan 29 16:20:51.940627 sshd[4156]: Accepted publickey for core from 172.24.4.1 port 58818 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:20:51.943481 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:51.954475 systemd-logind[1468]: New session 13 of user core.
Jan 29 16:20:51.961034 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 16:20:52.829108 sshd[4158]: Connection closed by 172.24.4.1 port 58818
Jan 29 16:20:52.830209 sshd-session[4156]: pam_unix(sshd:session): session closed for user core
Jan 29 16:20:52.835831 systemd[1]: sshd@10-172.24.4.227:22-172.24.4.1:58818.service: Deactivated successfully.
Jan 29 16:20:52.837887 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 16:20:52.838937 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit.
Jan 29 16:20:52.840226 systemd-logind[1468]: Removed session 13.
Jan 29 16:20:57.859925 systemd[1]: Started sshd@11-172.24.4.227:22-172.24.4.1:58744.service - OpenSSH per-connection server daemon (172.24.4.1:58744).
Jan 29 16:20:59.160319 sshd[4170]: Accepted publickey for core from 172.24.4.1 port 58744 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:20:59.163320 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:20:59.175476 systemd-logind[1468]: New session 14 of user core.
Jan 29 16:20:59.182645 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 16:21:00.129982 sshd[4174]: Connection closed by 172.24.4.1 port 58744
Jan 29 16:21:00.131694 sshd-session[4170]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:00.148820 systemd[1]: sshd@11-172.24.4.227:22-172.24.4.1:58744.service: Deactivated successfully.
Jan 29 16:21:00.153063 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 16:21:00.157395 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit.
Jan 29 16:21:00.164047 systemd[1]: Started sshd@12-172.24.4.227:22-172.24.4.1:58752.service - OpenSSH per-connection server daemon (172.24.4.1:58752).
Jan 29 16:21:00.167432 systemd-logind[1468]: Removed session 14.
Jan 29 16:21:01.336435 sshd[4185]: Accepted publickey for core from 172.24.4.1 port 58752 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:01.339212 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:01.351221 systemd-logind[1468]: New session 15 of user core.
Jan 29 16:21:01.359644 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 16:21:02.206292 sshd[4188]: Connection closed by 172.24.4.1 port 58752
Jan 29 16:21:02.207712 sshd-session[4185]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:02.222457 systemd[1]: sshd@12-172.24.4.227:22-172.24.4.1:58752.service: Deactivated successfully.
Jan 29 16:21:02.226696 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 16:21:02.229104 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit.
Jan 29 16:21:02.236632 systemd[1]: Started sshd@13-172.24.4.227:22-172.24.4.1:58766.service - OpenSSH per-connection server daemon (172.24.4.1:58766).
Jan 29 16:21:02.238186 systemd-logind[1468]: Removed session 15.
Jan 29 16:21:03.453210 sshd[4197]: Accepted publickey for core from 172.24.4.1 port 58766 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:03.456089 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:03.470682 systemd-logind[1468]: New session 16 of user core.
Jan 29 16:21:03.480675 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 16:21:04.211837 sshd[4200]: Connection closed by 172.24.4.1 port 58766
Jan 29 16:21:04.213059 sshd-session[4197]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:04.219729 systemd[1]: sshd@13-172.24.4.227:22-172.24.4.1:58766.service: Deactivated successfully.
Jan 29 16:21:04.225150 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 16:21:04.228791 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit.
Jan 29 16:21:04.230918 systemd-logind[1468]: Removed session 16.
Jan 29 16:21:09.242896 systemd[1]: Started sshd@14-172.24.4.227:22-172.24.4.1:34122.service - OpenSSH per-connection server daemon (172.24.4.1:34122).
Jan 29 16:21:10.839486 sshd[4212]: Accepted publickey for core from 172.24.4.1 port 34122 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:10.842442 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:10.855657 systemd-logind[1468]: New session 17 of user core.
Jan 29 16:21:10.861006 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 16:21:11.623086 sshd[4214]: Connection closed by 172.24.4.1 port 34122
Jan 29 16:21:11.624207 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:11.629665 systemd[1]: sshd@14-172.24.4.227:22-172.24.4.1:34122.service: Deactivated successfully.
Jan 29 16:21:11.633883 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 16:21:11.637132 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit.
Jan 29 16:21:11.639799 systemd-logind[1468]: Removed session 17.
Jan 29 16:21:16.649934 systemd[1]: Started sshd@15-172.24.4.227:22-172.24.4.1:49472.service - OpenSSH per-connection server daemon (172.24.4.1:49472).
Jan 29 16:21:18.235259 sshd[4226]: Accepted publickey for core from 172.24.4.1 port 49472 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:18.237377 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:18.245531 systemd-logind[1468]: New session 18 of user core.
Jan 29 16:21:18.248819 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 16:21:19.080641 sshd[4230]: Connection closed by 172.24.4.1 port 49472
Jan 29 16:21:19.083201 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:19.099795 systemd[1]: sshd@15-172.24.4.227:22-172.24.4.1:49472.service: Deactivated successfully.
Jan 29 16:21:19.103283 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 16:21:19.105662 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit.
Jan 29 16:21:19.117052 systemd[1]: Started sshd@16-172.24.4.227:22-172.24.4.1:49476.service - OpenSSH per-connection server daemon (172.24.4.1:49476).
Jan 29 16:21:19.120520 systemd-logind[1468]: Removed session 18.
Jan 29 16:21:20.653418 sshd[4240]: Accepted publickey for core from 172.24.4.1 port 49476 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:20.656267 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:20.667660 systemd-logind[1468]: New session 19 of user core.
Jan 29 16:21:20.675660 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 16:21:21.446715 sshd[4243]: Connection closed by 172.24.4.1 port 49476
Jan 29 16:21:21.446452 sshd-session[4240]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:21.467591 systemd[1]: sshd@16-172.24.4.227:22-172.24.4.1:49476.service: Deactivated successfully.
Jan 29 16:21:21.471714 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 16:21:21.474465 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit.
Jan 29 16:21:21.481945 systemd[1]: Started sshd@17-172.24.4.227:22-172.24.4.1:49482.service - OpenSSH per-connection server daemon (172.24.4.1:49482).
Jan 29 16:21:21.485109 systemd-logind[1468]: Removed session 19.
Jan 29 16:21:22.506270 sshd[4251]: Accepted publickey for core from 172.24.4.1 port 49482 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:22.508448 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:22.523234 systemd-logind[1468]: New session 20 of user core.
Jan 29 16:21:22.528744 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 16:21:25.033505 sshd[4254]: Connection closed by 172.24.4.1 port 49482
Jan 29 16:21:25.035173 sshd-session[4251]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:25.049836 systemd[1]: sshd@17-172.24.4.227:22-172.24.4.1:49482.service: Deactivated successfully.
Jan 29 16:21:25.055249 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 16:21:25.058034 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit.
Jan 29 16:21:25.068686 systemd[1]: Started sshd@18-172.24.4.227:22-172.24.4.1:39858.service - OpenSSH per-connection server daemon (172.24.4.1:39858).
Jan 29 16:21:25.074755 systemd-logind[1468]: Removed session 20.
Jan 29 16:21:26.417609 sshd[4269]: Accepted publickey for core from 172.24.4.1 port 39858 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:26.419943 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:26.431467 systemd-logind[1468]: New session 21 of user core.
Jan 29 16:21:26.446876 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 16:21:27.499962 sshd[4272]: Connection closed by 172.24.4.1 port 39858
Jan 29 16:21:27.502868 sshd-session[4269]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:27.529802 systemd[1]: sshd@18-172.24.4.227:22-172.24.4.1:39858.service: Deactivated successfully.
Jan 29 16:21:27.535977 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 16:21:27.538264 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit.
Jan 29 16:21:27.551109 systemd[1]: Started sshd@19-172.24.4.227:22-172.24.4.1:39874.service - OpenSSH per-connection server daemon (172.24.4.1:39874).
Jan 29 16:21:27.558154 systemd-logind[1468]: Removed session 21.
Jan 29 16:21:28.735429 sshd[4281]: Accepted publickey for core from 172.24.4.1 port 39874 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:28.738257 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:28.749528 systemd-logind[1468]: New session 22 of user core.
Jan 29 16:21:28.759667 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 16:21:29.585449 sshd[4284]: Connection closed by 172.24.4.1 port 39874
Jan 29 16:21:29.586154 sshd-session[4281]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:29.592281 systemd[1]: sshd@19-172.24.4.227:22-172.24.4.1:39874.service: Deactivated successfully.
Jan 29 16:21:29.597315 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 16:21:29.601608 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit.
Jan 29 16:21:29.604396 systemd-logind[1468]: Removed session 22.
Jan 29 16:21:34.613906 systemd[1]: Started sshd@20-172.24.4.227:22-172.24.4.1:34318.service - OpenSSH per-connection server daemon (172.24.4.1:34318).
Jan 29 16:21:36.013006 sshd[4298]: Accepted publickey for core from 172.24.4.1 port 34318 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:36.016051 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:36.028470 systemd-logind[1468]: New session 23 of user core.
Jan 29 16:21:36.035633 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 16:21:36.953501 sshd[4300]: Connection closed by 172.24.4.1 port 34318
Jan 29 16:21:36.956610 sshd-session[4298]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:36.973180 systemd[1]: sshd@20-172.24.4.227:22-172.24.4.1:34318.service: Deactivated successfully.
Jan 29 16:21:36.986408 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 16:21:36.991413 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit.
Jan 29 16:21:36.993792 systemd-logind[1468]: Removed session 23.
Jan 29 16:21:41.980936 systemd[1]: Started sshd@21-172.24.4.227:22-172.24.4.1:34330.service - OpenSSH per-connection server daemon (172.24.4.1:34330).
Jan 29 16:21:43.373668 sshd[4313]: Accepted publickey for core from 172.24.4.1 port 34330 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:43.376259 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:43.388440 systemd-logind[1468]: New session 24 of user core.
Jan 29 16:21:43.397648 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 16:21:44.129698 sshd[4315]: Connection closed by 172.24.4.1 port 34330
Jan 29 16:21:44.130595 sshd-session[4313]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:44.145619 systemd[1]: sshd@21-172.24.4.227:22-172.24.4.1:34330.service: Deactivated successfully.
Jan 29 16:21:44.148862 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 16:21:44.151018 systemd-logind[1468]: Session 24 logged out. Waiting for processes to exit.
Jan 29 16:21:44.156947 systemd[1]: Started sshd@22-172.24.4.227:22-172.24.4.1:47576.service - OpenSSH per-connection server daemon (172.24.4.1:47576).
Jan 29 16:21:44.159543 systemd-logind[1468]: Removed session 24.
Jan 29 16:21:46.060006 sshd[4326]: Accepted publickey for core from 172.24.4.1 port 47576 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:46.063740 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:46.078460 systemd-logind[1468]: New session 25 of user core.
Jan 29 16:21:46.086705 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 16:21:48.194164 containerd[1485]: time="2025-01-29T16:21:48.192794427Z" level=info msg="StopContainer for \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\" with timeout 30 (s)"
Jan 29 16:21:48.195240 containerd[1485]: time="2025-01-29T16:21:48.195149603Z" level=info msg="Stop container \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\" with signal terminated"
Jan 29 16:21:48.216721 systemd[1]: cri-containerd-ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263.scope: Deactivated successfully.
Jan 29 16:21:48.217893 containerd[1485]: time="2025-01-29T16:21:48.217713509Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 16:21:48.226147 containerd[1485]: time="2025-01-29T16:21:48.226030146Z" level=info msg="StopContainer for \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\" with timeout 2 (s)"
Jan 29 16:21:48.226551 containerd[1485]: time="2025-01-29T16:21:48.226533085Z" level=info msg="Stop container \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\" with signal terminated"
Jan 29 16:21:48.234901 systemd-networkd[1384]: lxc_health: Link DOWN
Jan 29 16:21:48.235796 systemd-networkd[1384]: lxc_health: Lost carrier
Jan 29 16:21:48.252652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263-rootfs.mount: Deactivated successfully.
Jan 29 16:21:48.253542 systemd[1]: cri-containerd-d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f.scope: Deactivated successfully.
Jan 29 16:21:48.254563 systemd[1]: cri-containerd-d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f.scope: Consumed 8.696s CPU time, 124.6M memory peak, 152K read from disk, 13.3M written to disk.
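The teardown starting at 16:21:48 is kubelet stopping the two Cilium pods: StopContainer is issued with a 30 s grace period for the operator container and 2 s for the agent, the runtime delivers the stop signal ("signal terminated"), and systemd reports each cri-containerd-*.scope as deactivated once the process exits, along with per-scope resource accounting. Under the CRI contract a container that outlives its grace period is killed; a minimal sketch of that terminate-then-kill pattern, with plain processes standing in for containers (this is an illustration, not containerd's actual implementation):

    import signal, subprocess

    def stop_with_timeout(proc: subprocess.Popen, timeout: float = 30.0) -> int:
        """Graceful-stop pattern mirrored by CRI StopContainer:
        deliver SIGTERM, wait up to `timeout` seconds, then SIGKILL."""
        proc.send_signal(signal.SIGTERM)       # "Stop container ... with signal terminated"
        try:
            return proc.wait(timeout=timeout)  # process exits within the grace period
        except subprocess.TimeoutExpired:
            proc.kill()                        # escalation after the deadline
            return proc.wait()

    p = subprocess.Popen(["sleep", "300"])
    print(stop_with_timeout(p, timeout=30.0))  # -15: exited on SIGTERM, no escalation needed

The "failed to reload cni configuration" error right after the stop is the side effect of the agent removing /etc/cni/net.d/05-cilium.conf on shutdown; it explains the "cni plugin not initialized" readiness errors that follow later in the log.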
Jan 29 16:21:48.277241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f-rootfs.mount: Deactivated successfully.
Jan 29 16:21:48.281762 containerd[1485]: time="2025-01-29T16:21:48.281691709Z" level=info msg="shim disconnected" id=d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f namespace=k8s.io
Jan 29 16:21:48.281762 containerd[1485]: time="2025-01-29T16:21:48.281746328Z" level=warning msg="cleaning up after shim disconnected" id=d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f namespace=k8s.io
Jan 29 16:21:48.281762 containerd[1485]: time="2025-01-29T16:21:48.281756277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:21:48.281926 containerd[1485]: time="2025-01-29T16:21:48.281860046Z" level=info msg="shim disconnected" id=ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263 namespace=k8s.io
Jan 29 16:21:48.281926 containerd[1485]: time="2025-01-29T16:21:48.281887596Z" level=warning msg="cleaning up after shim disconnected" id=ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263 namespace=k8s.io
Jan 29 16:21:48.281926 containerd[1485]: time="2025-01-29T16:21:48.281898326Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:21:48.307556 containerd[1485]: time="2025-01-29T16:21:48.307376090Z" level=info msg="StopContainer for \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\" returns successfully"
Jan 29 16:21:48.308096 containerd[1485]: time="2025-01-29T16:21:48.308071071Z" level=info msg="StopPodSandbox for \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\""
Jan 29 16:21:48.308608 containerd[1485]: time="2025-01-29T16:21:48.308564443Z" level=info msg="Container to stop \"5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:21:48.308794 containerd[1485]: time="2025-01-29T16:21:48.308775559Z" level=info msg="Container to stop \"368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:21:48.308948 containerd[1485]: time="2025-01-29T16:21:48.308874910Z" level=info msg="Container to stop \"4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:21:48.308948 containerd[1485]: time="2025-01-29T16:21:48.308890469Z" level=info msg="Container to stop \"50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:21:48.308948 containerd[1485]: time="2025-01-29T16:21:48.308902942Z" level=info msg="Container to stop \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:21:48.309231 containerd[1485]: time="2025-01-29T16:21:48.308530922Z" level=info msg="StopContainer for \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\" returns successfully"
Jan 29 16:21:48.309508 containerd[1485]: time="2025-01-29T16:21:48.309387037Z" level=info msg="StopPodSandbox for \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\""
Jan 29 16:21:48.309508 containerd[1485]: time="2025-01-29T16:21:48.309415219Z" level=info msg="Container to stop \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:21:48.313057 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a-shm.mount: Deactivated successfully.
Jan 29 16:21:48.313191 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4-shm.mount: Deactivated successfully.
Jan 29 16:21:48.320197 systemd[1]: cri-containerd-fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a.scope: Deactivated successfully.
Jan 29 16:21:48.328842 systemd[1]: cri-containerd-aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4.scope: Deactivated successfully.
Jan 29 16:21:48.350737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a-rootfs.mount: Deactivated successfully.
Jan 29 16:21:48.397273 containerd[1485]: time="2025-01-29T16:21:48.397194175Z" level=info msg="shim disconnected" id=fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a namespace=k8s.io
Jan 29 16:21:48.397273 containerd[1485]: time="2025-01-29T16:21:48.397265976Z" level=warning msg="cleaning up after shim disconnected" id=fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a namespace=k8s.io
Jan 29 16:21:48.397273 containerd[1485]: time="2025-01-29T16:21:48.397277838Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:21:48.400896 containerd[1485]: time="2025-01-29T16:21:48.400695447Z" level=info msg="shim disconnected" id=aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4 namespace=k8s.io
Jan 29 16:21:48.400896 containerd[1485]: time="2025-01-29T16:21:48.400742994Z" level=warning msg="cleaning up after shim disconnected" id=aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4 namespace=k8s.io
Jan 29 16:21:48.400896 containerd[1485]: time="2025-01-29T16:21:48.400752271Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:21:48.418990 containerd[1485]: time="2025-01-29T16:21:48.418846805Z" level=info msg="TearDown network for sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" successfully"
Jan 29 16:21:48.418990 containerd[1485]: time="2025-01-29T16:21:48.418884925Z" level=info msg="StopPodSandbox for \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" returns successfully"
Jan 29 16:21:48.421794 containerd[1485]: time="2025-01-29T16:21:48.421766323Z" level=info msg="TearDown network for sandbox \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\" successfully"
Jan 29 16:21:48.421794 containerd[1485]: time="2025-01-29T16:21:48.421792281Z" level=info msg="StopPodSandbox for \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\" returns successfully"
Jan 29 16:21:48.445864 kubelet[2746]: I0129 16:21:48.444742 2746 scope.go:117] "RemoveContainer" containerID="ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263"
Jan 29 16:21:48.449520 containerd[1485]: time="2025-01-29T16:21:48.449188142Z" level=info msg="RemoveContainer for \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\""
Jan 29 16:21:48.470095 containerd[1485]: time="2025-01-29T16:21:48.469697973Z" level=info msg="RemoveContainer for \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\" returns successfully"
Jan 29 16:21:48.470220 kubelet[2746]: I0129 16:21:48.469992 2746 scope.go:117] "RemoveContainer" containerID="ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263"
Jan 29 16:21:48.471784 containerd[1485]: time="2025-01-29T16:21:48.471688412Z" level=error msg="ContainerStatus for \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\": not found"
Jan 29 16:21:48.472137 kubelet[2746]: E0129 16:21:48.472062 2746 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\": not found" containerID="ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263"
Jan 29 16:21:48.472308 kubelet[2746]: I0129 16:21:48.472159 2746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263"} err="failed to get container status \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad51dc8b17893977b4739957c16a5241d43cc68bd475ee673aac801e87dd4263\": not found"
Jan 29 16:21:48.472381 kubelet[2746]: I0129 16:21:48.472353 2746 scope.go:117] "RemoveContainer" containerID="d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f"
Jan 29 16:21:48.473889 containerd[1485]: time="2025-01-29T16:21:48.473607611Z" level=info msg="RemoveContainer for \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\""
Jan 29 16:21:48.491314 containerd[1485]: time="2025-01-29T16:21:48.491212450Z" level=info msg="RemoveContainer for \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\" returns successfully"
Jan 29 16:21:48.491664 kubelet[2746]: I0129 16:21:48.491551 2746 scope.go:117] "RemoveContainer" containerID="368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e"
Jan 29 16:21:48.494070 containerd[1485]: time="2025-01-29T16:21:48.494008412Z" level=info msg="RemoveContainer for \"368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e\""
Jan 29 16:21:48.504580 containerd[1485]: time="2025-01-29T16:21:48.504486581Z" level=info msg="RemoveContainer for \"368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e\" returns successfully"
Jan 29 16:21:48.505111 kubelet[2746]: I0129 16:21:48.504877 2746 scope.go:117] "RemoveContainer" containerID="50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826"
Jan 29 16:21:48.507274 containerd[1485]: time="2025-01-29T16:21:48.507129183Z" level=info msg="RemoveContainer for \"50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826\""
Jan 29 16:21:48.510202 kubelet[2746]: I0129 16:21:48.509504 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-etc-cni-netd\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510202 kubelet[2746]: I0129 16:21:48.509541 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmt8d\" (UniqueName: \"kubernetes.io/projected/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-kube-api-access-bmt8d\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510202 kubelet[2746]: I0129 16:21:48.509565 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-host-proc-sys-kernel\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510202 kubelet[2746]: I0129 16:21:48.509583 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cilium-cgroup\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510202 kubelet[2746]: I0129 16:21:48.509603 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-lib-modules\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510202 kubelet[2746]: I0129 16:21:48.509625 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw7rg\" (UniqueName: \"kubernetes.io/projected/e8956fda-f110-4260-aa68-ab3633f0b34b-kube-api-access-cw7rg\") pod \"e8956fda-f110-4260-aa68-ab3633f0b34b\" (UID: \"e8956fda-f110-4260-aa68-ab3633f0b34b\") "
Jan 29 16:21:48.510439 kubelet[2746]: I0129 16:21:48.509644 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-hostproc\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510439 kubelet[2746]: I0129 16:21:48.509665 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-clustermesh-secrets\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510439 kubelet[2746]: I0129 16:21:48.509684 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-hubble-tls\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510439 kubelet[2746]: I0129 16:21:48.509706 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8956fda-f110-4260-aa68-ab3633f0b34b-cilium-config-path\") pod \"e8956fda-f110-4260-aa68-ab3633f0b34b\" (UID: \"e8956fda-f110-4260-aa68-ab3633f0b34b\") "
Jan 29 16:21:48.510439 kubelet[2746]: I0129 16:21:48.509725 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-xtables-lock\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510439 kubelet[2746]: I0129 16:21:48.509760 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cilium-config-path\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510614 kubelet[2746]: I0129 16:21:48.509778 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cilium-run\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510614 kubelet[2746]: I0129 16:21:48.509795 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-host-proc-sys-net\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510614 kubelet[2746]: I0129 16:21:48.509817 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-bpf-maps\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510614 kubelet[2746]: I0129 16:21:48.509835 2746 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cni-path\") pod \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\" (UID: \"bd1631d5-6ffe-4c15-a229-d8bae18e25e8\") "
Jan 29 16:21:48.510614 kubelet[2746]: I0129 16:21:48.509895 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cni-path" (OuterVolumeSpecName: "cni-path") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:21:48.510614 kubelet[2746]: I0129 16:21:48.509928 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:21:48.512589 kubelet[2746]: I0129 16:21:48.512571 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:21:48.512995 kubelet[2746]: I0129 16:21:48.512976 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:21:48.513098 kubelet[2746]: I0129 16:21:48.513083 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:21:48.513531 kubelet[2746]: I0129 16:21:48.513512 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:21:48.515675 kubelet[2746]: I0129 16:21:48.515535 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-hostproc" (OuterVolumeSpecName: "hostproc") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:21:48.517009 kubelet[2746]: I0129 16:21:48.516986 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8956fda-f110-4260-aa68-ab3633f0b34b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e8956fda-f110-4260-aa68-ab3633f0b34b" (UID: "e8956fda-f110-4260-aa68-ab3633f0b34b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:21:48.521380 kubelet[2746]: I0129 16:21:48.517146 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:21:48.521380 kubelet[2746]: I0129 16:21:48.517161 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:21:48.521380 kubelet[2746]: I0129 16:21:48.517179 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:21:48.523340 kubelet[2746]: I0129 16:21:48.523298 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-kube-api-access-bmt8d" (OuterVolumeSpecName: "kube-api-access-bmt8d") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "kube-api-access-bmt8d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:21:48.523582 containerd[1485]: time="2025-01-29T16:21:48.523502540Z" level=info msg="RemoveContainer for \"50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826\" returns successfully"
Jan 29 16:21:48.523912 kubelet[2746]: I0129 16:21:48.523867 2746 scope.go:117] "RemoveContainer" containerID="4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c"
Jan 29 16:21:48.525048 kubelet[2746]: I0129 16:21:48.524933 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:21:48.526459 containerd[1485]: time="2025-01-29T16:21:48.526405969Z" level=info msg="RemoveContainer for \"4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c\""
Jan 29 16:21:48.527576 kubelet[2746]: I0129 16:21:48.527551 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:21:48.527881 kubelet[2746]: I0129 16:21:48.527777 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bd1631d5-6ffe-4c15-a229-d8bae18e25e8" (UID: "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:21:48.528090 kubelet[2746]: I0129 16:21:48.528041 2746 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8956fda-f110-4260-aa68-ab3633f0b34b-kube-api-access-cw7rg" (OuterVolumeSpecName: "kube-api-access-cw7rg") pod "e8956fda-f110-4260-aa68-ab3633f0b34b" (UID: "e8956fda-f110-4260-aa68-ab3633f0b34b"). InnerVolumeSpecName "kube-api-access-cw7rg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:21:48.544106 containerd[1485]: time="2025-01-29T16:21:48.544047445Z" level=info msg="RemoveContainer for \"4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c\" returns successfully"
Jan 29 16:21:48.544625 kubelet[2746]: I0129 16:21:48.544567 2746 scope.go:117] "RemoveContainer" containerID="5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b"
Jan 29 16:21:48.546246 containerd[1485]: time="2025-01-29T16:21:48.546150230Z" level=info msg="RemoveContainer for \"5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b\""
Jan 29 16:21:48.586963 containerd[1485]: time="2025-01-29T16:21:48.586887514Z" level=info msg="RemoveContainer for \"5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b\" returns successfully"
Jan 29 16:21:48.587524 kubelet[2746]: I0129 16:21:48.587442 2746 scope.go:117] "RemoveContainer" containerID="d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f"
Jan 29 16:21:48.588011 containerd[1485]: time="2025-01-29T16:21:48.587949015Z" level=error msg="ContainerStatus for \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\": not found"
Jan 29 16:21:48.588332 kubelet[2746]: E0129 16:21:48.588245 2746 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\": not found" containerID="d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f"
Jan 29 16:21:48.588332 kubelet[2746]: I0129 16:21:48.588275 2746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f"} err="failed to get container status \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4bca6c80339c8077bbaf3016ff9df7ec6f413c7c7d31c3881526e9ab8e42f3f\": not found"
Jan 29 16:21:48.588332 kubelet[2746]: I0129 16:21:48.588297 2746 scope.go:117] "RemoveContainer" containerID="368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e"
Jan 29 16:21:48.588853 containerd[1485]: time="2025-01-29T16:21:48.588786788Z" level=error msg="ContainerStatus for \"368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e\": not found"
Jan 29 16:21:48.589121 kubelet[2746]: E0129 16:21:48.589014 2746 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e\": not found" containerID="368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e"
Jan 29 16:21:48.589121 kubelet[2746]: I0129 16:21:48.589050 2746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e"} err="failed to get container status \"368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e\": rpc error: code = NotFound desc = an error occurred when try to find container \"368f7f1d3384c26b055d064e7f1f2fed02db57bac77dbd6786d4a03ebced978e\": not found"
Jan 29 16:21:48.589121 kubelet[2746]: I0129 16:21:48.589065 2746 scope.go:117] "RemoveContainer" containerID="50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826"
Jan 29 16:21:48.589556 containerd[1485]: time="2025-01-29T16:21:48.589364393Z" level=error msg="ContainerStatus for \"50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826\": not found"
Jan 29 16:21:48.589609 kubelet[2746]: E0129 16:21:48.589519 2746 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826\": not found" containerID="50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826"
Jan 29 16:21:48.589867 kubelet[2746]: I0129 16:21:48.589666 2746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826"} err="failed to get container status \"50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826\": rpc error: code = NotFound desc = an error occurred when try to find container \"50bf40c8bd4c362f97c294f16c430c09aaff65147698d6405ed26d7cea32b826\": not found"
Jan 29 16:21:48.589867 kubelet[2746]: I0129 16:21:48.589688 2746 scope.go:117] "RemoveContainer" containerID="4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c"
Jan 29 16:21:48.590121 containerd[1485]: time="2025-01-29T16:21:48.590013971Z" level=error msg="ContainerStatus for \"4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c\": not found"
Jan 29 16:21:48.590445 kubelet[2746]: E0129 16:21:48.590389 2746 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c\": not found" containerID="4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c"
Jan 29 16:21:48.590524 kubelet[2746]: I0129 16:21:48.590469 2746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c"} err="failed to get container status \"4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a222693a5917c6623ec9cc2f5df27c050e460a1f93b4c78bafb0a2bd42bdf7c\": not found"
Jan 29 16:21:48.590563 kubelet[2746]: I0129 16:21:48.590531 2746 scope.go:117] "RemoveContainer" containerID="5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b"
Jan 29 16:21:48.590981 containerd[1485]: time="2025-01-29T16:21:48.590916601Z" level=error msg="ContainerStatus for \"5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b\": not found"
Jan 29 16:21:48.591187 kubelet[2746]: E0129 16:21:48.591113 2746 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b\": not found" containerID="5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b"
Jan 29 16:21:48.591187 kubelet[2746]: I0129 16:21:48.591155 2746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b"} err="failed to get container status \"5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c0d84ea75e2e0d47f1789611c744460fc34a01dab7a9234eb12e1fbccce8a7b\": not found"
Jan 29 16:21:48.610587 kubelet[2746]: I0129 16:21:48.610569 2746 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cilium-cgroup\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.610745 kubelet[2746]: I0129 16:21:48.610679 2746 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-lib-modules\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.610745 kubelet[2746]: I0129 16:21:48.610696 2746 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cw7rg\" (UniqueName: \"kubernetes.io/projected/e8956fda-f110-4260-aa68-ab3633f0b34b-kube-api-access-cw7rg\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.610745 kubelet[2746]: I0129 16:21:48.610708 2746 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-hostproc\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.610745 kubelet[2746]: I0129 16:21:48.610719 2746 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-clustermesh-secrets\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.611011 kubelet[2746]: I0129 16:21:48.610729 2746 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-hubble-tls\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.611011 kubelet[2746]: I0129 16:21:48.610893 2746 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8956fda-f110-4260-aa68-ab3633f0b34b-cilium-config-path\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.611011 kubelet[2746]: I0129 16:21:48.610903 2746 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-xtables-lock\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.611011 kubelet[2746]: I0129 16:21:48.610915 2746 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cilium-config-path\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.611011 kubelet[2746]: I0129 16:21:48.610924 2746 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cilium-run\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.611011 kubelet[2746]: I0129 16:21:48.610937 2746 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-host-proc-sys-net\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.611011 kubelet[2746]: I0129 16:21:48.610958 2746 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-bpf-maps\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.611204 kubelet[2746]: I0129 16:21:48.610968 2746 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-cni-path\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.611204 kubelet[2746]: I0129 16:21:48.610976 2746 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-etc-cni-netd\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.611204 kubelet[2746]: I0129 16:21:48.610985 2746 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bmt8d\" (UniqueName: \"kubernetes.io/projected/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-kube-api-access-bmt8d\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.611204 kubelet[2746]: I0129 16:21:48.610996 2746 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd1631d5-6ffe-4c15-a229-d8bae18e25e8-host-proc-sys-kernel\") on node \"ci-4230-0-0-a-7095f58259.novalocal\" DevicePath \"\""
Jan 29 16:21:48.749509 systemd[1]: Removed slice kubepods-besteffort-pode8956fda_f110_4260_aa68_ab3633f0b34b.slice - libcontainer container kubepods-besteffort-pode8956fda_f110_4260_aa68_ab3633f0b34b.slice.
Jan 29 16:21:48.758803 systemd[1]: Removed slice kubepods-burstable-podbd1631d5_6ffe_4c15_a229_d8bae18e25e8.slice - libcontainer container kubepods-burstable-podbd1631d5_6ffe_4c15_a229_d8bae18e25e8.slice.
Jan 29 16:21:48.759049 systemd[1]: kubepods-burstable-podbd1631d5_6ffe_4c15_a229_d8bae18e25e8.slice: Consumed 8.816s CPU time, 125M memory peak, 152K read from disk, 13.3M written to disk.
Jan 29 16:21:48.865511 kubelet[2746]: I0129 16:21:48.864624 2746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd1631d5-6ffe-4c15-a229-d8bae18e25e8" path="/var/lib/kubelet/pods/bd1631d5-6ffe-4c15-a229-d8bae18e25e8/volumes"
Jan 29 16:21:48.866561 kubelet[2746]: I0129 16:21:48.866522 2746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8956fda-f110-4260-aa68-ab3633f0b34b" path="/var/lib/kubelet/pods/e8956fda-f110-4260-aa68-ab3633f0b34b/volumes"
Jan 29 16:21:48.984295 kubelet[2746]: E0129 16:21:48.984119 2746 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 16:21:49.207860 systemd[1]: var-lib-kubelet-pods-bd1631d5\x2d6ffe\x2d4c15\x2da229\x2dd8bae18e25e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbmt8d.mount: Deactivated successfully.
Jan 29 16:21:49.208254 systemd[1]: var-lib-kubelet-pods-bd1631d5\x2d6ffe\x2d4c15\x2da229\x2dd8bae18e25e8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 29 16:21:49.208501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4-rootfs.mount: Deactivated successfully.
Jan 29 16:21:49.208682 systemd[1]: var-lib-kubelet-pods-e8956fda\x2df110\x2d4260\x2daa68\x2dab3633f0b34b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcw7rg.mount: Deactivated successfully.
Jan 29 16:21:49.208868 systemd[1]: var-lib-kubelet-pods-bd1631d5\x2d6ffe\x2d4c15\x2da229\x2dd8bae18e25e8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 29 16:21:50.280773 sshd[4329]: Connection closed by 172.24.4.1 port 47576
Jan 29 16:21:50.284763 sshd-session[4326]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:50.301606 systemd[1]: sshd@22-172.24.4.227:22-172.24.4.1:47576.service: Deactivated successfully.
Jan 29 16:21:50.307206 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 16:21:50.310202 systemd-logind[1468]: Session 25 logged out. Waiting for processes to exit.
Jan 29 16:21:50.323054 systemd[1]: Started sshd@23-172.24.4.227:22-172.24.4.1:47588.service - OpenSSH per-connection server daemon (172.24.4.1:47588).
Jan 29 16:21:50.325825 systemd-logind[1468]: Removed session 25.
Jan 29 16:21:51.579636 sshd[4493]: Accepted publickey for core from 172.24.4.1 port 47588 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:51.582515 sshd-session[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:51.593534 systemd-logind[1468]: New session 26 of user core.
Jan 29 16:21:51.605716 systemd[1]: Started session-26.scope - Session 26 of User core.
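The \x2d sequences in the mount unit names above are systemd's unit-name escaping at work: for path-derived units, "/" becomes the separator "-", so any literal "-" (for example inside a pod UID) and other unsafe characters such as "~" must be hex-escaped. The canonical tool is systemd-escape(1); a simplified Python reconstruction of the rule, checked against the unit names in this log (simplification is mine, not systemd's exact code):

    def systemd_escape_path(path: str) -> str:
        """Simplified systemd path escaping: strip outer slashes, escape
        bytes outside [A-Za-z0-9_] (and non-leading '.') as \\xNN, join with '-'."""
        def esc(part: str) -> str:
            out = []
            for i, ch in enumerate(part):
                if ch.isalnum() or ch == '_' or (ch == '.' and i > 0):
                    out.append(ch)
                else:
                    out.append(''.join('\\x%02x' % b for b in ch.encode()))
            return ''.join(out)
        return '-'.join(esc(p) for p in path.strip('/').split('/'))

    uid = "bd1631d5-6ffe-4c15-a229-d8bae18e25e8"
    path = f"/var/lib/kubelet/pods/{uid}/volumes/kubernetes.io~projected/kube-api-access-bmt8d"
    print(systemd_escape_path(path) + ".mount")
    # var-lib-kubelet-pods-bd1631d5\x2d6ffe\x2d4c15\x2da229\x2dd8bae18e25e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbmt8d.mount

This also explains why "~" in the volume plugin directory shows up as \x7e in the journal.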
Jan 29 16:21:52.146665 kubelet[2746]: I0129 16:21:52.146611 2746 setters.go:580] "Node became not ready" node="ci-4230-0-0-a-7095f58259.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:21:52Z","lastTransitionTime":"2025-01-29T16:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 16:21:53.153773 kubelet[2746]: I0129 16:21:53.152479 2746 topology_manager.go:215] "Topology Admit Handler" podUID="9b595471-e90d-4b0e-94bc-75e2c65a7557" podNamespace="kube-system" podName="cilium-9k8mh"
Jan 29 16:21:53.153773 kubelet[2746]: E0129 16:21:53.152540 2746 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd1631d5-6ffe-4c15-a229-d8bae18e25e8" containerName="apply-sysctl-overwrites"
Jan 29 16:21:53.153773 kubelet[2746]: E0129 16:21:53.152551 2746 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd1631d5-6ffe-4c15-a229-d8bae18e25e8" containerName="mount-bpf-fs"
Jan 29 16:21:53.153773 kubelet[2746]: E0129 16:21:53.152559 2746 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd1631d5-6ffe-4c15-a229-d8bae18e25e8" containerName="cilium-agent"
Jan 29 16:21:53.153773 kubelet[2746]: E0129 16:21:53.152568 2746 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e8956fda-f110-4260-aa68-ab3633f0b34b" containerName="cilium-operator"
Jan 29 16:21:53.153773 kubelet[2746]: E0129 16:21:53.152574 2746 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd1631d5-6ffe-4c15-a229-d8bae18e25e8" containerName="mount-cgroup"
Jan 29 16:21:53.153773 kubelet[2746]: E0129 16:21:53.152581 2746 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd1631d5-6ffe-4c15-a229-d8bae18e25e8" containerName="clean-cilium-state"
Jan 29 16:21:53.153773 kubelet[2746]: I0129 16:21:53.152612 2746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8956fda-f110-4260-aa68-ab3633f0b34b" containerName="cilium-operator"
Jan 29 16:21:53.153773 kubelet[2746]: I0129 16:21:53.152619 2746 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1631d5-6ffe-4c15-a229-d8bae18e25e8" containerName="cilium-agent"
Jan 29 16:21:53.158264 kubelet[2746]: W0129 16:21:53.158234 2746 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230-0-0-a-7095f58259.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-a-7095f58259.novalocal' and this object
Jan 29 16:21:53.158516 kubelet[2746]: E0129 16:21:53.158442 2746 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230-0-0-a-7095f58259.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-a-7095f58259.novalocal' and this object
Jan 29 16:21:53.168871 systemd[1]: Created slice kubepods-burstable-pod9b595471_e90d_4b0e_94bc_75e2c65a7557.slice - libcontainer container kubepods-burstable-pod9b595471_e90d_4b0e_94bc_75e2c65a7557.slice.
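Here kubelet admits the replacement pod cilium-9k8mh and systemd creates its pod-level cgroup slice. The slice name is derived from the pod's QoS class plus its UID, with the dashes in the UID mapped to underscores because "-" already encodes slice hierarchy in systemd unit names; both the removed slices earlier (besteffort for the operator pod, burstable for the agent pod) and this new slice follow the pattern. A small sketch of the naming rule as evidenced by this log (the guaranteed-QoS branch, where the pod sits directly under kubepods.slice, is an assumption not exercised here):

    def pod_slice_name(qos: str, pod_uid: str) -> str:
        """Reconstruct the systemd slice name kubelet uses for a pod,
        matching the Created/Removed slice entries in this journal."""
        prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"  # guaranteed case assumed
        return f"{prefix}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice_name("burstable", "9b595471-e90d-4b0e-94bc-75e2c65a7557"))
    # kubepods-burstable-pod9b595471_e90d_4b0e_94bc_75e2c65a7557.slice
    print(pod_slice_name("besteffort", "e8956fda-f110-4260-aa68-ab3633f0b34b"))
    # kubepods-besteffort-pode8956fda_f110_4260_aa68_ab3633f0b34b.slice

The reflector warning about the cilium-clustermesh secret is the node authorizer at work: a kubelet may only read secrets referenced by pods already bound to it, and this watch races the pod admission that grants the relationship; its knock-on effect appears a second later in the mount errors below.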
Jan 29 16:21:53.243016 kubelet[2746]: I0129 16:21:53.242981 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b595471-e90d-4b0e-94bc-75e2c65a7557-bpf-maps\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.243285 kubelet[2746]: I0129 16:21:53.243248 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b595471-e90d-4b0e-94bc-75e2c65a7557-clustermesh-secrets\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.243428 kubelet[2746]: I0129 16:21:53.243412 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b595471-e90d-4b0e-94bc-75e2c65a7557-cni-path\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.243562 kubelet[2746]: I0129 16:21:53.243546 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b595471-e90d-4b0e-94bc-75e2c65a7557-host-proc-sys-kernel\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.243685 kubelet[2746]: I0129 16:21:53.243669 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b595471-e90d-4b0e-94bc-75e2c65a7557-hubble-tls\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.243843 kubelet[2746]: I0129 16:21:53.243807 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b595471-e90d-4b0e-94bc-75e2c65a7557-cilium-cgroup\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.243886 kubelet[2746]: I0129 16:21:53.243862 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b595471-e90d-4b0e-94bc-75e2c65a7557-lib-modules\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.243914 kubelet[2746]: I0129 16:21:53.243889 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b595471-e90d-4b0e-94bc-75e2c65a7557-cilium-config-path\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.243941 kubelet[2746]: I0129 16:21:53.243928 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b595471-e90d-4b0e-94bc-75e2c65a7557-cilium-run\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.243975 kubelet[2746]: I0129 16:21:53.243949 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b595471-e90d-4b0e-94bc-75e2c65a7557-hostproc\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.243975 kubelet[2746]: I0129 16:21:53.243968 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b595471-e90d-4b0e-94bc-75e2c65a7557-etc-cni-netd\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.244036 kubelet[2746]: I0129 16:21:53.243987 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b595471-e90d-4b0e-94bc-75e2c65a7557-xtables-lock\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.244036 kubelet[2746]: I0129 16:21:53.244005 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74wz\" (UniqueName: \"kubernetes.io/projected/9b595471-e90d-4b0e-94bc-75e2c65a7557-kube-api-access-d74wz\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.244036 kubelet[2746]: I0129 16:21:53.244025 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9b595471-e90d-4b0e-94bc-75e2c65a7557-cilium-ipsec-secrets\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.244122 kubelet[2746]: I0129 16:21:53.244049 2746 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b595471-e90d-4b0e-94bc-75e2c65a7557-host-proc-sys-net\") pod \"cilium-9k8mh\" (UID: \"9b595471-e90d-4b0e-94bc-75e2c65a7557\") " pod="kube-system/cilium-9k8mh"
Jan 29 16:21:53.288381 sshd[4496]: Connection closed by 172.24.4.1 port 47588
Jan 29 16:21:53.289535 sshd-session[4493]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:53.299929 systemd[1]: sshd@23-172.24.4.227:22-172.24.4.1:47588.service: Deactivated successfully.
Jan 29 16:21:53.302941 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 16:21:53.303285 systemd[1]: session-26.scope: Consumed 1.043s CPU time, 23.6M memory peak.
Jan 29 16:21:53.306410 systemd-logind[1468]: Session 26 logged out. Waiting for processes to exit.
Jan 29 16:21:53.311825 systemd[1]: Started sshd@24-172.24.4.227:22-172.24.4.1:47592.service - OpenSSH per-connection server daemon (172.24.4.1:47592).
Jan 29 16:21:53.313587 systemd-logind[1468]: Removed session 26.
Jan 29 16:21:53.985636 kubelet[2746]: E0129 16:21:53.985535 2746 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 16:21:54.346095 kubelet[2746]: E0129 16:21:54.345875 2746 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jan 29 16:21:54.347167 kubelet[2746]: E0129 16:21:54.347095 2746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b595471-e90d-4b0e-94bc-75e2c65a7557-clustermesh-secrets podName:9b595471-e90d-4b0e-94bc-75e2c65a7557 nodeName:}" failed. No retries permitted until 2025-01-29 16:21:54.847033383 +0000 UTC m=+176.191433817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/9b595471-e90d-4b0e-94bc-75e2c65a7557-clustermesh-secrets") pod "cilium-9k8mh" (UID: "9b595471-e90d-4b0e-94bc-75e2c65a7557") : failed to sync secret cache: timed out waiting for the condition
Jan 29 16:21:54.709806 sshd[4506]: Accepted publickey for core from 172.24.4.1 port 47592 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:54.712602 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:54.725146 systemd-logind[1468]: New session 27 of user core.
Jan 29 16:21:54.735652 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 16:21:54.976576 containerd[1485]: time="2025-01-29T16:21:54.976484884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9k8mh,Uid:9b595471-e90d-4b0e-94bc-75e2c65a7557,Namespace:kube-system,Attempt:0,}"
Jan 29 16:21:55.039824 containerd[1485]: time="2025-01-29T16:21:55.038690171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:21:55.039824 containerd[1485]: time="2025-01-29T16:21:55.039026218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:21:55.039824 containerd[1485]: time="2025-01-29T16:21:55.039081289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:21:55.040786 containerd[1485]: time="2025-01-29T16:21:55.040593397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:21:55.079686 systemd[1]: Started cri-containerd-edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e.scope - libcontainer container edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e.
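The two kubelet errors above are one mechanism: MountVolume.SetUp for clustermesh-secrets ran before the secret informer cache had synced, so the operation fails and nestedpendingoperations schedules a retry no sooner than 500ms later (durationBeforeRetry), with the delay growing on repeated failures. The error string "timed out waiting for the condition" is the message of wait.ErrWaitTimeout from apimachinery. A minimal model of that retry shape; the 500ms initial delay is from the log, while the factor, step count, and the stand-in condition (which succeeds on the third attempt) are assumptions:

```go
// Sketch of the exponential-backoff retry behind "No retries permitted
// until ... (durationBeforeRetry 500ms)" in the kubelet log above.
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // durationBeforeRetry in the log
		Factor:   2.0,                    // growth factor: an assumption
		Steps:    5,                      // attempt budget: an assumption
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d: MountVolume.SetUp for clustermesh-secrets\n", attempt)
		// Stand-in: pretend the secret cache syncs by the third attempt.
		return attempt >= 3, nil
	})
	if errors.Is(err, wait.ErrWaitTimeout) {
		// Same "timed out waiting for the condition" text as in the log.
		fmt.Println("gave up:", err)
	}
}
```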
Jan 29 16:21:55.122017 containerd[1485]: time="2025-01-29T16:21:55.121972889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9k8mh,Uid:9b595471-e90d-4b0e-94bc-75e2c65a7557,Namespace:kube-system,Attempt:0,} returns sandbox id \"edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e\""
Jan 29 16:21:55.127553 containerd[1485]: time="2025-01-29T16:21:55.127518485Z" level=info msg="CreateContainer within sandbox \"edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:21:55.149070 containerd[1485]: time="2025-01-29T16:21:55.149017994Z" level=info msg="CreateContainer within sandbox \"edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"23c1291cceb7bd162f8123659bb099b7fb214f875e0f6d88cb9f21fd78d0a37e\""
Jan 29 16:21:55.149903 containerd[1485]: time="2025-01-29T16:21:55.149868316Z" level=info msg="StartContainer for \"23c1291cceb7bd162f8123659bb099b7fb214f875e0f6d88cb9f21fd78d0a37e\""
Jan 29 16:21:55.174464 systemd[1]: Started cri-containerd-23c1291cceb7bd162f8123659bb099b7fb214f875e0f6d88cb9f21fd78d0a37e.scope - libcontainer container 23c1291cceb7bd162f8123659bb099b7fb214f875e0f6d88cb9f21fd78d0a37e.
Jan 29 16:21:55.204548 containerd[1485]: time="2025-01-29T16:21:55.204416426Z" level=info msg="StartContainer for \"23c1291cceb7bd162f8123659bb099b7fb214f875e0f6d88cb9f21fd78d0a37e\" returns successfully"
Jan 29 16:21:55.211665 systemd[1]: cri-containerd-23c1291cceb7bd162f8123659bb099b7fb214f875e0f6d88cb9f21fd78d0a37e.scope: Deactivated successfully.
Jan 29 16:21:55.255791 containerd[1485]: time="2025-01-29T16:21:55.255114535Z" level=info msg="shim disconnected" id=23c1291cceb7bd162f8123659bb099b7fb214f875e0f6d88cb9f21fd78d0a37e namespace=k8s.io
Jan 29 16:21:55.255791 containerd[1485]: time="2025-01-29T16:21:55.255232140Z" level=warning msg="cleaning up after shim disconnected" id=23c1291cceb7bd162f8123659bb099b7fb214f875e0f6d88cb9f21fd78d0a37e namespace=k8s.io
Jan 29 16:21:55.255791 containerd[1485]: time="2025-01-29T16:21:55.255261464Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:21:55.272507 containerd[1485]: time="2025-01-29T16:21:55.271613008Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:21:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 16:21:55.367386 sshd[4513]: Connection closed by 172.24.4.1 port 47592
Jan 29 16:21:55.366111 sshd-session[4506]: pam_unix(sshd:session): session closed for user core
Jan 29 16:21:55.387050 systemd[1]: sshd@24-172.24.4.227:22-172.24.4.1:47592.service: Deactivated successfully.
Jan 29 16:21:55.393470 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 16:21:55.399666 systemd-logind[1468]: Session 27 logged out. Waiting for processes to exit.
Jan 29 16:21:55.405980 systemd[1]: Started sshd@25-172.24.4.227:22-172.24.4.1:57540.service - OpenSSH per-connection server daemon (172.24.4.1:57540).
Jan 29 16:21:55.409577 systemd-logind[1468]: Removed session 27.
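This block is one complete init-container lifecycle: the sandbox edf7a431... comes up, mount-cgroup is created and started inside it, exits almost immediately, and its runc shim disconnects (the exit-status-255 cleanup warning is shim teardown noise, not a container failure). After the fact, the same sandbox and its exited containers can be inspected over the CRI endpoint; a sketch shelling out to crictl, assuming crictl is installed and configured against containerd's socket (flags may vary across crictl versions):

```go
// Sketch: inspect the cilium-9k8mh sandbox and its (exited) init
// containers via crictl. The sandbox id is the one from the log above.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("crictl", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("crictl %v failed: %v\n", args, err)
	}
	fmt.Print(string(out))
}

func main() {
	// Find the pod sandbox by pod name...
	run("pods", "--name", "cilium-9k8mh")
	// ...then list all containers in it; -a included so short-lived
	// init containers such as mount-cgroup still show up.
	run("ps", "-a", "--pod", "edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e")
}
```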
Jan 29 16:21:55.486023 containerd[1485]: time="2025-01-29T16:21:55.485749520Z" level=info msg="CreateContainer within sandbox \"edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:21:55.515955 containerd[1485]: time="2025-01-29T16:21:55.515484064Z" level=info msg="CreateContainer within sandbox \"edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"29ab3fff2a641de5cd99fc1e62cc3e1d2b01933369aa11632a2ddbb3180eb45b\""
Jan 29 16:21:55.517604 containerd[1485]: time="2025-01-29T16:21:55.516467621Z" level=info msg="StartContainer for \"29ab3fff2a641de5cd99fc1e62cc3e1d2b01933369aa11632a2ddbb3180eb45b\""
Jan 29 16:21:55.552710 systemd[1]: Started cri-containerd-29ab3fff2a641de5cd99fc1e62cc3e1d2b01933369aa11632a2ddbb3180eb45b.scope - libcontainer container 29ab3fff2a641de5cd99fc1e62cc3e1d2b01933369aa11632a2ddbb3180eb45b.
Jan 29 16:21:55.594648 containerd[1485]: time="2025-01-29T16:21:55.594598647Z" level=info msg="StartContainer for \"29ab3fff2a641de5cd99fc1e62cc3e1d2b01933369aa11632a2ddbb3180eb45b\" returns successfully"
Jan 29 16:21:55.599022 systemd[1]: cri-containerd-29ab3fff2a641de5cd99fc1e62cc3e1d2b01933369aa11632a2ddbb3180eb45b.scope: Deactivated successfully.
Jan 29 16:21:55.628431 containerd[1485]: time="2025-01-29T16:21:55.628368282Z" level=info msg="shim disconnected" id=29ab3fff2a641de5cd99fc1e62cc3e1d2b01933369aa11632a2ddbb3180eb45b namespace=k8s.io
Jan 29 16:21:55.628622 containerd[1485]: time="2025-01-29T16:21:55.628425718Z" level=warning msg="cleaning up after shim disconnected" id=29ab3fff2a641de5cd99fc1e62cc3e1d2b01933369aa11632a2ddbb3180eb45b namespace=k8s.io
Jan 29 16:21:55.628622 containerd[1485]: time="2025-01-29T16:21:55.628455453Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:21:55.868958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472721246.mount: Deactivated successfully.
Jan 29 16:21:56.499314 containerd[1485]: time="2025-01-29T16:21:56.499174895Z" level=info msg="CreateContainer within sandbox \"edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:21:56.552995 containerd[1485]: time="2025-01-29T16:21:56.552802845Z" level=info msg="CreateContainer within sandbox \"edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4a86f24e090a47009d3f45bec08ebeb310e4259f11cac2c791108d2dd29cfc67\""
Jan 29 16:21:56.555845 containerd[1485]: time="2025-01-29T16:21:56.555776240Z" level=info msg="StartContainer for \"4a86f24e090a47009d3f45bec08ebeb310e4259f11cac2c791108d2dd29cfc67\""
Jan 29 16:21:56.605520 systemd[1]: Started cri-containerd-4a86f24e090a47009d3f45bec08ebeb310e4259f11cac2c791108d2dd29cfc67.scope - libcontainer container 4a86f24e090a47009d3f45bec08ebeb310e4259f11cac2c791108d2dd29cfc67.
Jan 29 16:21:56.639687 systemd[1]: cri-containerd-4a86f24e090a47009d3f45bec08ebeb310e4259f11cac2c791108d2dd29cfc67.scope: Deactivated successfully.
Jan 29 16:21:56.644449 containerd[1485]: time="2025-01-29T16:21:56.644279005Z" level=info msg="StartContainer for \"4a86f24e090a47009d3f45bec08ebeb310e4259f11cac2c791108d2dd29cfc67\" returns successfully"
Jan 29 16:21:56.677361 containerd[1485]: time="2025-01-29T16:21:56.677262813Z" level=info msg="shim disconnected" id=4a86f24e090a47009d3f45bec08ebeb310e4259f11cac2c791108d2dd29cfc67 namespace=k8s.io
Jan 29 16:21:56.677740 containerd[1485]: time="2025-01-29T16:21:56.677559138Z" level=warning msg="cleaning up after shim disconnected" id=4a86f24e090a47009d3f45bec08ebeb310e4259f11cac2c791108d2dd29cfc67 namespace=k8s.io
Jan 29 16:21:56.677740 containerd[1485]: time="2025-01-29T16:21:56.677582561Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:21:56.871576 systemd[1]: run-containerd-runc-k8s.io-4a86f24e090a47009d3f45bec08ebeb310e4259f11cac2c791108d2dd29cfc67-runc.HqpEQ7.mount: Deactivated successfully.
Jan 29 16:21:56.873073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a86f24e090a47009d3f45bec08ebeb310e4259f11cac2c791108d2dd29cfc67-rootfs.mount: Deactivated successfully.
Jan 29 16:21:56.891979 sshd[4621]: Accepted publickey for core from 172.24.4.1 port 57540 ssh2: RSA SHA256:Owzcd0XrIr9p693U2T41Wawy5AcZcVn7QuTEUKQxcT4
Jan 29 16:21:56.894900 sshd-session[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:21:56.907014 systemd-logind[1468]: New session 28 of user core.
Jan 29 16:21:56.927792 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 16:21:57.503496 containerd[1485]: time="2025-01-29T16:21:57.503442327Z" level=info msg="CreateContainer within sandbox \"edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:21:57.534421 containerd[1485]: time="2025-01-29T16:21:57.534368510Z" level=info msg="CreateContainer within sandbox \"edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8b3803bb11ec5125e616700febbc1848b5a2b3f970a81665910573d5afc60d3a\""
Jan 29 16:21:57.535159 containerd[1485]: time="2025-01-29T16:21:57.535074518Z" level=info msg="StartContainer for \"8b3803bb11ec5125e616700febbc1848b5a2b3f970a81665910573d5afc60d3a\""
Jan 29 16:21:57.573495 systemd[1]: Started cri-containerd-8b3803bb11ec5125e616700febbc1848b5a2b3f970a81665910573d5afc60d3a.scope - libcontainer container 8b3803bb11ec5125e616700febbc1848b5a2b3f970a81665910573d5afc60d3a.
Jan 29 16:21:57.601251 systemd[1]: cri-containerd-8b3803bb11ec5125e616700febbc1848b5a2b3f970a81665910573d5afc60d3a.scope: Deactivated successfully.
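sshd identifies the client key above only by type and SHA-256 fingerprint. That fingerprint is the base64-encoded SHA-256 of the key's wire encoding, and it can be recomputed from the matching authorized_keys entry; a sketch with golang.org/x/crypto/ssh, where the key file path is an assumption:

```go
// Sketch: recompute the "SHA256:..." fingerprint sshd logs on a
// successful publickey auth. Assumes the core user's authorized_keys
// holds the key seen in the log above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	raw, err := os.ReadFile("/home/core/.ssh/authorized_keys") // path is an assumption
	if err != nil {
		panic(err)
	}
	key, _, _, _, err := ssh.ParseAuthorizedKey(raw) // parses the first entry
	if err != nil {
		panic(err)
	}
	// Prints e.g. "ssh-rsa SHA256:Owzc..."; sshd renders the type as "RSA".
	fmt.Println(key.Type(), ssh.FingerprintSHA256(key))
}
```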
Jan 29 16:21:57.612228 containerd[1485]: time="2025-01-29T16:21:57.610445030Z" level=info msg="StartContainer for \"8b3803bb11ec5125e616700febbc1848b5a2b3f970a81665910573d5afc60d3a\" returns successfully"
Jan 29 16:21:57.647844 containerd[1485]: time="2025-01-29T16:21:57.647635481Z" level=info msg="shim disconnected" id=8b3803bb11ec5125e616700febbc1848b5a2b3f970a81665910573d5afc60d3a namespace=k8s.io
Jan 29 16:21:57.647844 containerd[1485]: time="2025-01-29T16:21:57.647691505Z" level=warning msg="cleaning up after shim disconnected" id=8b3803bb11ec5125e616700febbc1848b5a2b3f970a81665910573d5afc60d3a namespace=k8s.io
Jan 29 16:21:57.647844 containerd[1485]: time="2025-01-29T16:21:57.647701743Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:21:57.868008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b3803bb11ec5125e616700febbc1848b5a2b3f970a81665910573d5afc60d3a-rootfs.mount: Deactivated successfully.
Jan 29 16:21:58.523830 containerd[1485]: time="2025-01-29T16:21:58.523269677Z" level=info msg="CreateContainer within sandbox \"edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:21:58.576260 containerd[1485]: time="2025-01-29T16:21:58.576135683Z" level=info msg="CreateContainer within sandbox \"edf7a431e0ad59299042f2b48cc6b43b547d446e66f2760e303a0d18d592cc0e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d7fe7b0a58730802ab571bf90953e5bc6c4c750ead2e6b62769ca59989cd2a35\""
Jan 29 16:21:58.581380 containerd[1485]: time="2025-01-29T16:21:58.580440374Z" level=info msg="StartContainer for \"d7fe7b0a58730802ab571bf90953e5bc6c4c750ead2e6b62769ca59989cd2a35\""
Jan 29 16:21:58.635467 systemd[1]: Started cri-containerd-d7fe7b0a58730802ab571bf90953e5bc6c4c750ead2e6b62769ca59989cd2a35.scope - libcontainer container d7fe7b0a58730802ab571bf90953e5bc6c4c750ead2e6b62769ca59989cd2a35.
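With clean-cilium-state finished, all four short-lived containers have run strictly one after another, each reaching Deactivated before the next CreateContainer, and only then is cilium-agent started. That is ordinary init-container semantics: kubelet runs init containers sequentially, and each must exit successfully before its successor starts. A minimal model of the ordering seen in this log, using the Kubernetes API types; the names are exactly those in the log, while images, mounts, and any additional init containers a real Cilium manifest may carry are omitted:

```go
// Sketch: the container ordering observed above, expressed as a pod
// spec. Names are taken from the log; everything else is elided.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			// Init containers run sequentially; each must exit 0 before
			// the next starts, matching the create/start/deactivate
			// pattern in the log.
			InitContainers: []corev1.Container{
				{Name: "mount-cgroup"},
				{Name: "apply-sysctl-overwrites"},
				{Name: "mount-bpf-fs"},
				{Name: "clean-cilium-state"},
			},
			// The long-running agent starts only after all of the above.
			Containers: []corev1.Container{
				{Name: "cilium-agent"},
			},
		},
	}
	for _, c := range pod.Spec.InitContainers {
		fmt.Println("init:", c.Name)
	}
	for _, c := range pod.Spec.Containers {
		fmt.Println("main:", c.Name)
	}
}
```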
Jan 29 16:21:58.672944 containerd[1485]: time="2025-01-29T16:21:58.672886657Z" level=info msg="StartContainer for \"d7fe7b0a58730802ab571bf90953e5bc6c4c750ead2e6b62769ca59989cd2a35\" returns successfully"
Jan 29 16:21:58.873258 containerd[1485]: time="2025-01-29T16:21:58.872700089Z" level=info msg="StopPodSandbox for \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\""
Jan 29 16:21:58.873258 containerd[1485]: time="2025-01-29T16:21:58.872924282Z" level=info msg="TearDown network for sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" successfully"
Jan 29 16:21:58.873258 containerd[1485]: time="2025-01-29T16:21:58.873006042Z" level=info msg="StopPodSandbox for \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" returns successfully"
Jan 29 16:21:58.875511 containerd[1485]: time="2025-01-29T16:21:58.875144166Z" level=info msg="RemovePodSandbox for \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\""
Jan 29 16:21:58.875511 containerd[1485]: time="2025-01-29T16:21:58.875176175Z" level=info msg="Forcibly stopping sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\""
Jan 29 16:21:58.875511 containerd[1485]: time="2025-01-29T16:21:58.875225967Z" level=info msg="TearDown network for sandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" successfully"
Jan 29 16:21:58.897582 containerd[1485]: time="2025-01-29T16:21:58.897382976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:21:58.897582 containerd[1485]: time="2025-01-29T16:21:58.897504129Z" level=info msg="RemovePodSandbox \"fba82cb3fa180dd75c0a5de2f5802df00146da130763c23bca7af461bb7c601a\" returns successfully"
Jan 29 16:21:58.899444 containerd[1485]: time="2025-01-29T16:21:58.898495764Z" level=info msg="StopPodSandbox for \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\""
Jan 29 16:21:58.899444 containerd[1485]: time="2025-01-29T16:21:58.898632054Z" level=info msg="TearDown network for sandbox \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\" successfully"
Jan 29 16:21:58.899444 containerd[1485]: time="2025-01-29T16:21:58.898645899Z" level=info msg="StopPodSandbox for \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\" returns successfully"
Jan 29 16:21:58.899444 containerd[1485]: time="2025-01-29T16:21:58.899087643Z" level=info msg="RemovePodSandbox for \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\""
Jan 29 16:21:58.899444 containerd[1485]: time="2025-01-29T16:21:58.899109042Z" level=info msg="Forcibly stopping sandbox \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\""
Jan 29 16:21:58.899444 containerd[1485]: time="2025-01-29T16:21:58.899222561Z" level=info msg="TearDown network for sandbox \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\" successfully"
Jan 29 16:21:58.906182 containerd[1485]: time="2025-01-29T16:21:58.906114282Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:21:58.906250 containerd[1485]: time="2025-01-29T16:21:58.906196453Z" level=info msg="RemovePodSandbox \"aae3d219f818ef2faf68276984477e58e22f55caca473e1840236969b36766a4\" returns successfully"
Jan 29 16:21:59.065382 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:21:59.116375 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jan 29 16:21:59.605471 kubelet[2746]: I0129 16:21:59.605408 2746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9k8mh" podStartSLOduration=6.60538996 podStartE2EDuration="6.60538996s" podCreationTimestamp="2025-01-29 16:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:21:59.604948837 +0000 UTC m=+180.949349211" watchObservedRunningTime="2025-01-29 16:21:59.60538996 +0000 UTC m=+180.949790334"
Jan 29 16:22:01.803862 systemd[1]: run-containerd-runc-k8s.io-d7fe7b0a58730802ab571bf90953e5bc6c4c750ead2e6b62769ca59989cd2a35-runc.xrQxZl.mount: Deactivated successfully.
Jan 29 16:22:01.850670 kubelet[2746]: E0129 16:22:01.849689 2746 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44286->127.0.0.1:42951: write tcp 127.0.0.1:44286->127.0.0.1:42951: write: broken pipe
Jan 29 16:22:02.302859 systemd-networkd[1384]: lxc_health: Link UP
Jan 29 16:22:02.305453 systemd-networkd[1384]: lxc_health: Gained carrier
Jan 29 16:22:04.132692 systemd-networkd[1384]: lxc_health: Gained IPv6LL
Jan 29 16:22:08.747442 sshd[4744]: Connection closed by 172.24.4.1 port 57540
Jan 29 16:22:08.748806 sshd-session[4621]: pam_unix(sshd:session): session closed for user core
Jan 29 16:22:08.756006 systemd-logind[1468]: Session 28 logged out. Waiting for processes to exit.
Jan 29 16:22:08.757603 systemd[1]: sshd@25-172.24.4.227:22-172.24.4.1:57540.service: Deactivated successfully.
Jan 29 16:22:08.761724 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 16:22:08.766476 systemd-logind[1468]: Removed session 28.
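The pod_startup_latency_tracker entry closes the story: podStartSLOduration is measured from podCreationTimestamp to the first observed Running state, and because no image pull was needed (firstStartedPulling and lastFinishedPulling are the zero time) it equals podStartE2EDuration. The arithmetic from the log's own timestamps:

```go
// Sketch: reproduce podStartSLOduration from the timestamps in the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, err := time.Parse(time.RFC3339, "2025-01-29T16:21:53Z") // podCreationTimestamp
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(time.RFC3339Nano, "2025-01-29T16:21:59.60538996Z") // watchObservedRunningTime
	if err != nil {
		panic(err)
	}
	// Prints 6.60538996s, the podStartSLOduration reported above.
	fmt.Println(running.Sub(created))
}
```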