Jan 13 21:49:09.097762 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 13 21:49:09.097791 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:49:09.097802 kernel: BIOS-provided physical RAM map: Jan 13 21:49:09.097810 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 13 21:49:09.097817 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 13 21:49:09.097829 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 13 21:49:09.097838 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 13 21:49:09.097846 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 13 21:49:09.097854 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 13 21:49:09.097861 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 13 21:49:09.097869 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 13 21:49:09.097877 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 13 21:49:09.097885 kernel: NX (Execute Disable) protection: active Jan 13 21:49:09.097893 kernel: APIC: Static calls initialized Jan 13 21:49:09.097905 kernel: SMBIOS 3.0.0 present. Jan 13 21:49:09.097914 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 13 21:49:09.097922 kernel: Hypervisor detected: KVM Jan 13 21:49:09.097930 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 21:49:09.097938 kernel: kvm-clock: using sched offset of 3870226846 cycles Jan 13 21:49:09.097950 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 21:49:09.097958 kernel: tsc: Detected 1996.249 MHz processor Jan 13 21:49:09.097967 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 21:49:09.097976 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 21:49:09.097984 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 13 21:49:09.097993 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 13 21:49:09.098001 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 21:49:09.098009 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 13 21:49:09.098018 kernel: ACPI: Early table checksum verification disabled Jan 13 21:49:09.098030 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 13 21:49:09.098039 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:49:09.098047 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:49:09.098056 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:49:09.098064 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 13 21:49:09.098072 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:49:09.098081 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:49:09.098089 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 13 21:49:09.098139 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 13 21:49:09.098152 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 13 21:49:09.098161 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 13 21:49:09.098169 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 13 21:49:09.098182 kernel: No NUMA configuration found Jan 13 21:49:09.098191 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 13 21:49:09.098200 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 13 21:49:09.098211 kernel: Zone ranges: Jan 13 21:49:09.098220 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 21:49:09.098229 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 13 21:49:09.098237 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 13 21:49:09.098246 kernel: Movable zone start for each node Jan 13 21:49:09.098255 kernel: Early memory node ranges Jan 13 21:49:09.098263 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 13 21:49:09.098272 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 13 21:49:09.098282 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 13 21:49:09.098292 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 13 21:49:09.098300 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 21:49:09.098309 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 13 21:49:09.098318 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 13 21:49:09.098327 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 13 21:49:09.098335 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 21:49:09.098344 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 21:49:09.098353 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 13 21:49:09.098364 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 21:49:09.098372 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 21:49:09.098381 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 21:49:09.098389 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 21:49:09.098398 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 21:49:09.098407 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 13 21:49:09.098415 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 13 21:49:09.098424 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 13 21:49:09.098433 kernel: Booting paravirtualized kernel on KVM Jan 13 21:49:09.098444 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 21:49:09.098453 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 13 21:49:09.098461 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 13 21:49:09.098470 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 13 21:49:09.098478 kernel: pcpu-alloc: [0] 0 1 Jan 13 21:49:09.098487 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 13 21:49:09.098497 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:49:09.098507 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:49:09.098517 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 21:49:09.098526 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:49:09.098535 kernel: Fallback order for Node 0: 0 Jan 13 21:49:09.098544 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 13 21:49:09.098552 kernel: Policy zone: Normal Jan 13 21:49:09.098561 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:49:09.098569 kernel: software IO TLB: area num 2. Jan 13 21:49:09.098579 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 227308K reserved, 0K cma-reserved) Jan 13 21:49:09.098588 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 21:49:09.098598 kernel: ftrace: allocating 37918 entries in 149 pages Jan 13 21:49:09.098607 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 21:49:09.098616 kernel: Dynamic Preempt: voluntary Jan 13 21:49:09.098624 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:49:09.098637 kernel: rcu: RCU event tracing is enabled. Jan 13 21:49:09.098646 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 21:49:09.098655 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:49:09.098664 kernel: Rude variant of Tasks RCU enabled. Jan 13 21:49:09.098674 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:49:09.098684 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 21:49:09.098693 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 21:49:09.098702 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 13 21:49:09.098710 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 21:49:09.098719 kernel: Console: colour VGA+ 80x25 Jan 13 21:49:09.098727 kernel: printk: console [tty0] enabled Jan 13 21:49:09.098736 kernel: printk: console [ttyS0] enabled Jan 13 21:49:09.098744 kernel: ACPI: Core revision 20230628 Jan 13 21:49:09.098753 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 21:49:09.098765 kernel: x2apic enabled Jan 13 21:49:09.098773 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 21:49:09.098782 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 21:49:09.098791 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 13 21:49:09.098799 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 13 21:49:09.098808 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 13 21:49:09.098817 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 13 21:49:09.098826 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 21:49:09.098834 kernel: Spectre V2 : Mitigation: Retpolines Jan 13 21:49:09.098843 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 21:49:09.098856 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 21:49:09.098864 kernel: Speculative Store Bypass: Vulnerable Jan 13 21:49:09.098873 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 13 21:49:09.098881 kernel: Freeing SMP alternatives memory: 32K Jan 13 21:49:09.098898 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:49:09.098910 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:49:09.098919 kernel: landlock: Up and running. Jan 13 21:49:09.098928 kernel: SELinux: Initializing. Jan 13 21:49:09.098937 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:49:09.098947 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:49:09.098956 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 13 21:49:09.098968 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:49:09.098977 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:49:09.098987 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:49:09.098996 kernel: Performance Events: AMD PMU driver. Jan 13 21:49:09.099005 kernel: ... version: 0 Jan 13 21:49:09.099016 kernel: ... bit width: 48 Jan 13 21:49:09.099025 kernel: ... generic registers: 4 Jan 13 21:49:09.099034 kernel: ... value mask: 0000ffffffffffff Jan 13 21:49:09.099044 kernel: ... max period: 00007fffffffffff Jan 13 21:49:09.099053 kernel: ... fixed-purpose events: 0 Jan 13 21:49:09.099062 kernel: ... event mask: 000000000000000f Jan 13 21:49:09.099071 kernel: signal: max sigframe size: 1440 Jan 13 21:49:09.099080 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:49:09.099089 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:49:09.099116 kernel: smp: Bringing up secondary CPUs ... Jan 13 21:49:09.099125 kernel: smpboot: x86: Booting SMP configuration: Jan 13 21:49:09.099134 kernel: .... 
node #0, CPUs: #1 Jan 13 21:49:09.099143 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 21:49:09.099152 kernel: smpboot: Max logical packages: 2 Jan 13 21:49:09.099161 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 13 21:49:09.099171 kernel: devtmpfs: initialized Jan 13 21:49:09.099180 kernel: x86/mm: Memory block size: 128MB Jan 13 21:49:09.099189 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:49:09.099201 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 21:49:09.099210 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:49:09.099219 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:49:09.099228 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:49:09.099238 kernel: audit: type=2000 audit(1736804947.964:1): state=initialized audit_enabled=0 res=1 Jan 13 21:49:09.099247 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:49:09.099256 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 21:49:09.099265 kernel: cpuidle: using governor menu Jan 13 21:49:09.099274 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:49:09.099285 kernel: dca service started, version 1.12.1 Jan 13 21:49:09.099294 kernel: PCI: Using configuration type 1 for base access Jan 13 21:49:09.099304 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 13 21:49:09.099313 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:49:09.099323 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:49:09.099332 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:49:09.099341 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:49:09.099350 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:49:09.099359 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:49:09.099371 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 21:49:09.099380 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 21:49:09.099389 kernel: ACPI: Interpreter enabled Jan 13 21:49:09.099398 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 21:49:09.099407 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 21:49:09.099417 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 21:49:09.099426 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 21:49:09.099435 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 13 21:49:09.099444 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 21:49:09.099605 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:49:09.099711 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 13 21:49:09.099807 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 13 21:49:09.099821 kernel: acpiphp: Slot [3] registered Jan 13 21:49:09.099831 kernel: acpiphp: Slot [4] registered Jan 13 21:49:09.099840 kernel: acpiphp: Slot [5] registered Jan 13 21:49:09.099849 kernel: acpiphp: Slot [6] registered Jan 13 21:49:09.099858 kernel: acpiphp: Slot [7] registered Jan 13 21:49:09.099871 kernel: acpiphp: Slot [8] registered Jan 13 21:49:09.099880 kernel: acpiphp: Slot [9] registered Jan 13 21:49:09.099889 kernel: acpiphp: Slot [10] registered Jan 13 21:49:09.099898 
kernel: acpiphp: Slot [11] registered Jan 13 21:49:09.099907 kernel: acpiphp: Slot [12] registered Jan 13 21:49:09.099916 kernel: acpiphp: Slot [13] registered Jan 13 21:49:09.099925 kernel: acpiphp: Slot [14] registered Jan 13 21:49:09.099934 kernel: acpiphp: Slot [15] registered Jan 13 21:49:09.099943 kernel: acpiphp: Slot [16] registered Jan 13 21:49:09.099954 kernel: acpiphp: Slot [17] registered Jan 13 21:49:09.099963 kernel: acpiphp: Slot [18] registered Jan 13 21:49:09.099972 kernel: acpiphp: Slot [19] registered Jan 13 21:49:09.099982 kernel: acpiphp: Slot [20] registered Jan 13 21:49:09.099990 kernel: acpiphp: Slot [21] registered Jan 13 21:49:09.100000 kernel: acpiphp: Slot [22] registered Jan 13 21:49:09.100009 kernel: acpiphp: Slot [23] registered Jan 13 21:49:09.100018 kernel: acpiphp: Slot [24] registered Jan 13 21:49:09.100027 kernel: acpiphp: Slot [25] registered Jan 13 21:49:09.100036 kernel: acpiphp: Slot [26] registered Jan 13 21:49:09.100047 kernel: acpiphp: Slot [27] registered Jan 13 21:49:09.100057 kernel: acpiphp: Slot [28] registered Jan 13 21:49:09.100066 kernel: acpiphp: Slot [29] registered Jan 13 21:49:09.100075 kernel: acpiphp: Slot [30] registered Jan 13 21:49:09.100084 kernel: acpiphp: Slot [31] registered Jan 13 21:49:09.100122 kernel: PCI host bridge to bus 0000:00 Jan 13 21:49:09.100233 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 21:49:09.100323 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 21:49:09.100430 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 21:49:09.100513 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 21:49:09.100592 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 13 21:49:09.100670 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 21:49:09.100781 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 13 21:49:09.100885 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 13 21:49:09.100997 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 13 21:49:09.101088 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 13 21:49:09.102452 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 13 21:49:09.102546 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 13 21:49:09.102637 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 13 21:49:09.102728 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 13 21:49:09.102829 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 13 21:49:09.102930 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 13 21:49:09.103022 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 13 21:49:09.103154 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 13 21:49:09.103247 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 13 21:49:09.103337 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 13 21:49:09.103425 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 13 21:49:09.103522 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 13 21:49:09.103616 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 21:49:09.103718 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 13 21:49:09.103812 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 13 21:49:09.103903 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 13 21:49:09.103992 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 13 21:49:09.104083 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 13 21:49:09.107220 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 13 21:49:09.107321 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 13 21:49:09.107410 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 13 21:49:09.107497 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 13 21:49:09.107595 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 13 21:49:09.107686 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 13 21:49:09.107776 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 13 21:49:09.107880 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 21:49:09.107970 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 13 21:49:09.108058 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 13 21:49:09.108173 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 13 21:49:09.108187 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 21:49:09.108196 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 21:49:09.108205 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 21:49:09.108214 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 21:49:09.108227 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 13 21:49:09.108236 kernel: iommu: Default domain type: Translated Jan 13 21:49:09.108245 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 21:49:09.108254 kernel: PCI: Using ACPI for IRQ routing Jan 13 21:49:09.108263 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 21:49:09.108272 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 13 21:49:09.108281 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 13 21:49:09.114788 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 13 21:49:09.114891 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 13 21:49:09.114988 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 21:49:09.115002 kernel: vgaarb: loaded Jan 13 21:49:09.115011 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 21:49:09.115020 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 21:49:09.115029 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:49:09.115038 kernel: pnp: PnP ACPI init Jan 13 21:49:09.115178 kernel: pnp 00:03: [dma 2] Jan 13 21:49:09.115194 kernel: pnp: PnP ACPI: found 5 devices Jan 13 21:49:09.115203 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 21:49:09.115217 kernel: NET: Registered PF_INET protocol family Jan 13 21:49:09.115226 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:49:09.115235 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 21:49:09.115244 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:49:09.115253 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 21:49:09.115262 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 13 21:49:09.115271 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 21:49:09.115280 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:49:09.115291 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:49:09.115300 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:49:09.115308 kernel: NET: Registered PF_XDP protocol family Jan 13 21:49:09.115398 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 21:49:09.115480 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 21:49:09.115558 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 21:49:09.115637 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 13 21:49:09.115716 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 13 21:49:09.115809 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 13 21:49:09.115911 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 13 21:49:09.115925 kernel: PCI: CLS 0 bytes, default 64 Jan 13 21:49:09.115934 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 21:49:09.115943 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 13 21:49:09.115951 kernel: Initialise system trusted keyrings Jan 13 21:49:09.115960 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 21:49:09.115969 kernel: Key type asymmetric registered Jan 13 21:49:09.115978 kernel: Asymmetric key parser 'x509' registered Jan 13 21:49:09.115990 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 21:49:09.115999 kernel: io scheduler mq-deadline registered Jan 13 21:49:09.116008 kernel: io scheduler kyber registered Jan 13 21:49:09.116017 kernel: io scheduler bfq registered Jan 13 21:49:09.116026 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 21:49:09.116036 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 13 21:49:09.116045 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 13 21:49:09.116054 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 13 21:49:09.116063 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 13 21:49:09.116074 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:49:09.116083 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 21:49:09.116091 kernel: random: crng init done Jan 13 21:49:09.116116 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 21:49:09.116125 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 21:49:09.116134 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 21:49:09.116230 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 13 21:49:09.116245 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 21:49:09.116329 kernel: rtc_cmos 00:04: registered as rtc0 Jan 13 21:49:09.116413 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:49:08 UTC (1736804948) Jan 13 21:49:09.116494 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 13 21:49:09.116507 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 13 21:49:09.116517 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:49:09.116525 kernel: Segment Routing with IPv6 Jan 13 21:49:09.116534 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:49:09.116543 kernel: NET: Registered PF_PACKET 
protocol family Jan 13 21:49:09.116551 kernel: Key type dns_resolver registered Jan 13 21:49:09.116564 kernel: IPI shorthand broadcast: enabled Jan 13 21:49:09.116573 kernel: sched_clock: Marking stable (983008736, 169674793)->(1189671989, -36988460) Jan 13 21:49:09.116582 kernel: registered taskstats version 1 Jan 13 21:49:09.116591 kernel: Loading compiled-in X.509 certificates Jan 13 21:49:09.116599 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 21:49:09.116608 kernel: Key type .fscrypt registered Jan 13 21:49:09.116616 kernel: Key type fscrypt-provisioning registered Jan 13 21:49:09.116625 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 21:49:09.116636 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:49:09.116644 kernel: ima: No architecture policies found Jan 13 21:49:09.116653 kernel: clk: Disabling unused clocks Jan 13 21:49:09.116662 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:49:09.116671 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:49:09.116680 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:49:09.116689 kernel: Run /init as init process Jan 13 21:49:09.116697 kernel: with arguments: Jan 13 21:49:09.116706 kernel: /init Jan 13 21:49:09.116714 kernel: with environment: Jan 13 21:49:09.116724 kernel: HOME=/ Jan 13 21:49:09.116733 kernel: TERM=linux Jan 13 21:49:09.116741 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:49:09.116753 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:49:09.116765 systemd[1]: Detected virtualization kvm. Jan 13 21:49:09.116775 systemd[1]: Detected architecture x86-64. Jan 13 21:49:09.116784 systemd[1]: Running in initrd. Jan 13 21:49:09.116795 systemd[1]: No hostname configured, using default hostname. Jan 13 21:49:09.116805 systemd[1]: Hostname set to . Jan 13 21:49:09.116814 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:49:09.116823 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:49:09.116833 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:49:09.116842 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:49:09.116853 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:49:09.116872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:49:09.116883 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:49:09.116894 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:49:09.116905 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:49:09.116915 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:49:09.116927 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 13 21:49:09.116937 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:49:09.116946 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:49:09.116956 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:49:09.116965 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:49:09.116975 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:49:09.116984 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:49:09.116994 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:49:09.117004 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:49:09.117015 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:49:09.117025 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:49:09.117034 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:49:09.117044 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:49:09.117054 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:49:09.117063 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:49:09.117073 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:49:09.117083 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:49:09.117116 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:49:09.117127 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:49:09.117137 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:49:09.117147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:49:09.117156 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:49:09.117166 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:49:09.117175 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:49:09.117189 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:49:09.117222 systemd-journald[184]: Collecting audit messages is disabled. Jan 13 21:49:09.117252 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:49:09.117263 systemd-journald[184]: Journal started Jan 13 21:49:09.117298 systemd-journald[184]: Runtime Journal (/run/log/journal/5e429e489159447e8bd042c4e1ec3d93) is 8.0M, max 78.3M, 70.3M free. Jan 13 21:49:09.076547 systemd-modules-load[185]: Inserted module 'overlay' Jan 13 21:49:09.154984 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:49:09.155009 kernel: Bridge firewalling registered Jan 13 21:49:09.155021 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:49:09.121136 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 13 21:49:09.155678 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:49:09.156645 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:49:09.164244 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:49:09.167234 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 13 21:49:09.168288 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:49:09.179269 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:49:09.190793 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:49:09.193132 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:49:09.193909 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:49:09.195227 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:49:09.204254 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:49:09.210245 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:49:09.220465 dracut-cmdline[218]: dracut-dracut-053 Jan 13 21:49:09.225429 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:49:09.247725 systemd-resolved[222]: Positive Trust Anchors: Jan 13 21:49:09.247742 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:49:09.247784 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:49:09.250707 systemd-resolved[222]: Defaulting to hostname 'linux'. Jan 13 21:49:09.251592 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:49:09.252994 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:49:09.314201 kernel: SCSI subsystem initialized Jan 13 21:49:09.325155 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:49:09.338442 kernel: iscsi: registered transport (tcp) Jan 13 21:49:09.361160 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:49:09.361235 kernel: QLogic iSCSI HBA Driver Jan 13 21:49:09.423536 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:49:09.430452 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:49:09.476982 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 21:49:09.477086 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:49:09.479171 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:49:09.537199 kernel: raid6: sse2x4 gen() 12956 MB/s Jan 13 21:49:09.555193 kernel: raid6: sse2x2 gen() 14655 MB/s Jan 13 21:49:09.573536 kernel: raid6: sse2x1 gen() 9930 MB/s Jan 13 21:49:09.573653 kernel: raid6: using algorithm sse2x2 gen() 14655 MB/s Jan 13 21:49:09.592630 kernel: raid6: .... xor() 9434 MB/s, rmw enabled Jan 13 21:49:09.592695 kernel: raid6: using ssse3x2 recovery algorithm Jan 13 21:49:09.615533 kernel: xor: measuring software checksum speed Jan 13 21:49:09.615631 kernel: prefetch64-sse : 17317 MB/sec Jan 13 21:49:09.616959 kernel: generic_sse : 15068 MB/sec Jan 13 21:49:09.617016 kernel: xor: using function: prefetch64-sse (17317 MB/sec) Jan 13 21:49:09.798166 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:49:09.814744 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:49:09.821415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:49:09.835000 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jan 13 21:49:09.839521 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:49:09.852438 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:49:09.872591 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Jan 13 21:49:09.915602 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:49:09.920425 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:49:09.985627 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:49:09.994293 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:49:10.008529 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:49:10.009986 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:49:10.011687 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:49:10.013487 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:49:10.021255 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:49:10.039421 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:49:10.070157 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 13 21:49:10.086830 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 13 21:49:10.086978 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:49:10.086992 kernel: GPT:17805311 != 20971519 Jan 13 21:49:10.087003 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:49:10.087014 kernel: GPT:17805311 != 20971519 Jan 13 21:49:10.087024 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:49:10.087035 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:49:10.099815 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:49:10.099979 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:49:10.102866 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 13 21:49:10.103655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:49:10.103793 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:49:10.105984 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:49:10.115302 kernel: libata version 3.00 loaded. Jan 13 21:49:10.119165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:49:10.129116 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (449) Jan 13 21:49:10.130483 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 13 21:49:10.139943 kernel: scsi host0: ata_piix Jan 13 21:49:10.140077 kernel: scsi host1: ata_piix Jan 13 21:49:10.140217 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 13 21:49:10.140231 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 13 21:49:10.152217 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450) Jan 13 21:49:10.151683 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 21:49:10.167196 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 21:49:10.198009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:49:10.203411 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 21:49:10.204006 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 21:49:10.210462 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:49:10.217308 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:49:10.220076 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:49:10.232073 disk-uuid[503]: Primary Header is updated. Jan 13 21:49:10.232073 disk-uuid[503]: Secondary Entries is updated. Jan 13 21:49:10.232073 disk-uuid[503]: Secondary Header is updated. Jan 13 21:49:10.243203 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:49:10.244648 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:49:10.250374 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:49:11.265152 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:49:11.266743 disk-uuid[507]: The operation has completed successfully. Jan 13 21:49:11.339203 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:49:11.339454 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:49:11.376232 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:49:11.392248 sh[525]: Success Jan 13 21:49:11.414136 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 13 21:49:11.473085 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:49:11.482262 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:49:11.483072 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 21:49:11.517491 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:49:11.517574 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:49:11.521034 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:49:11.524693 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:49:11.527498 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:49:11.547556 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:49:11.549651 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:49:11.562389 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:49:11.568355 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:49:11.588152 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:49:11.588224 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:49:11.592134 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:49:11.602174 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:49:11.621500 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:49:11.625914 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:49:11.640436 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:49:11.653513 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:49:11.734112 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:49:11.742349 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:49:11.767231 systemd-networkd[709]: lo: Link UP Jan 13 21:49:11.767242 systemd-networkd[709]: lo: Gained carrier Jan 13 21:49:11.768427 systemd-networkd[709]: Enumeration completed Jan 13 21:49:11.769203 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:49:11.769206 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:49:11.770230 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:49:11.770893 systemd[1]: Reached target network.target - Network. Jan 13 21:49:11.771360 systemd-networkd[709]: eth0: Link UP Jan 13 21:49:11.771364 systemd-networkd[709]: eth0: Gained carrier Jan 13 21:49:11.771373 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:49:11.783600 systemd-networkd[709]: eth0: DHCPv4 address 172.24.4.62/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 13 21:49:11.809134 ignition[629]: Ignition 2.19.0 Jan 13 21:49:11.809159 ignition[629]: Stage: fetch-offline Jan 13 21:49:11.811436 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 13 21:49:11.809259 ignition[629]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:49:11.809289 ignition[629]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:49:11.809415 ignition[629]: parsed url from cmdline: "" Jan 13 21:49:11.809419 ignition[629]: no config URL provided Jan 13 21:49:11.809426 ignition[629]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:49:11.809435 ignition[629]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:49:11.809442 ignition[629]: failed to fetch config: resource requires networking Jan 13 21:49:11.809827 ignition[629]: Ignition finished successfully Jan 13 21:49:11.818305 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 21:49:11.832179 ignition[720]: Ignition 2.19.0 Jan 13 21:49:11.832192 ignition[720]: Stage: fetch Jan 13 21:49:11.832388 ignition[720]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:49:11.832401 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:49:11.832507 ignition[720]: parsed url from cmdline: "" Jan 13 21:49:11.832511 ignition[720]: no config URL provided Jan 13 21:49:11.832517 ignition[720]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:49:11.832526 ignition[720]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:49:11.832660 ignition[720]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 13 21:49:11.832744 ignition[720]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 13 21:49:11.832780 ignition[720]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 13 21:49:12.014682 ignition[720]: GET result: OK Jan 13 21:49:12.014850 ignition[720]: parsing config with SHA512: e22dcdbe4d34162b41413f928151b40431277614ebff61951ad00694be20322ac81894570a29fbd56ebb50bd3d4dc632642b643e2cb6e2ab8b2bf9b97ae00542 Jan 13 21:49:12.024625 unknown[720]: fetched base config from "system" Jan 13 21:49:12.024651 unknown[720]: fetched base config from "system" Jan 13 21:49:12.025765 ignition[720]: fetch: fetch complete Jan 13 21:49:12.024666 unknown[720]: fetched user config from "openstack" Jan 13 21:49:12.025778 ignition[720]: fetch: fetch passed Jan 13 21:49:12.028189 systemd-resolved[222]: Detected conflict on linux IN A 172.24.4.62 Jan 13 21:49:12.025865 ignition[720]: Ignition finished successfully Jan 13 21:49:12.028205 systemd-resolved[222]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Jan 13 21:49:12.029340 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 21:49:12.040465 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:49:12.075214 ignition[726]: Ignition 2.19.0 Jan 13 21:49:12.075242 ignition[726]: Stage: kargs Jan 13 21:49:12.075667 ignition[726]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:49:12.075694 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:49:12.078288 ignition[726]: kargs: kargs passed Jan 13 21:49:12.080477 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:49:12.078392 ignition[726]: Ignition finished successfully Jan 13 21:49:12.091445 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 13 21:49:12.129956 ignition[732]: Ignition 2.19.0 Jan 13 21:49:12.129978 ignition[732]: Stage: disks Jan 13 21:49:12.130389 ignition[732]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:49:12.130413 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:49:12.136512 ignition[732]: disks: disks passed Jan 13 21:49:12.136606 ignition[732]: Ignition finished successfully Jan 13 21:49:12.138859 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:49:12.142140 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:49:12.145213 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:49:12.146642 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:49:12.149555 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:49:12.151999 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:49:12.160345 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:49:12.192975 systemd-fsck[740]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 21:49:12.202319 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:49:12.215962 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:49:12.370495 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 21:49:12.371129 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:49:12.372252 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:49:12.379179 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:49:12.389218 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:49:12.392777 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:49:12.395460 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 13 21:49:12.397038 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:49:12.397074 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:49:12.401521 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:49:12.407135 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (748) Jan 13 21:49:12.417142 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:49:12.417173 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:49:12.417186 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:49:12.418180 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:49:12.440221 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:49:12.444948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:49:12.649725 initrd-setup-root[777]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:49:12.698034 initrd-setup-root[784]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:49:12.724992 initrd-setup-root[791]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:49:12.764234 initrd-setup-root[798]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:49:12.959509 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:49:12.969341 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:49:12.978500 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:49:12.996908 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:49:13.000369 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:49:13.029945 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:49:13.039071 ignition[866]: INFO : Ignition 2.19.0 Jan 13 21:49:13.039071 ignition[866]: INFO : Stage: mount Jan 13 21:49:13.042303 ignition[866]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:49:13.042303 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:49:13.042303 ignition[866]: INFO : mount: mount passed Jan 13 21:49:13.042303 ignition[866]: INFO : Ignition finished successfully Jan 13 21:49:13.042978 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:49:13.611447 systemd-networkd[709]: eth0: Gained IPv6LL Jan 13 21:49:19.887497 coreos-metadata[750]: Jan 13 21:49:19.887 WARN failed to locate config-drive, using the metadata service API instead Jan 13 21:49:19.929450 coreos-metadata[750]: Jan 13 21:49:19.929 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 21:49:19.946174 coreos-metadata[750]: Jan 13 21:49:19.945 INFO Fetch successful Jan 13 21:49:19.946174 coreos-metadata[750]: Jan 13 21:49:19.946 INFO wrote hostname ci-4081-3-0-d-9566454817.novalocal to /sysroot/etc/hostname Jan 13 21:49:19.949853 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 13 21:49:19.950224 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 13 21:49:19.964376 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:49:20.012853 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:49:20.032202 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (883) Jan 13 21:49:20.039951 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:49:20.040040 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:49:20.044293 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:49:20.055191 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:49:20.060850 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:49:20.109229 ignition[901]: INFO : Ignition 2.19.0 Jan 13 21:49:20.109229 ignition[901]: INFO : Stage: files Jan 13 21:49:20.112597 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:49:20.112597 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:49:20.112597 ignition[901]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:49:20.118614 ignition[901]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:49:20.118614 ignition[901]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:49:20.122856 ignition[901]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:49:20.122856 ignition[901]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:49:20.122856 ignition[901]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:49:20.122000 unknown[901]: wrote ssh authorized keys file for user: core Jan 13 21:49:20.130525 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:49:20.130525 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:49:20.130525 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:49:20.130525 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:49:20.186620 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 21:49:20.495845 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:49:20.495845 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:49:20.495845 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 21:49:21.160151 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 13 21:49:21.712461 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:49:21.712461 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:49:21.717231 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:49:21.717231 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:49:21.717231 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:49:21.717231 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:49:21.717231 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:49:21.717231 
ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:49:21.717231 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:49:21.717231 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:49:21.717231 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:49:21.717231 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:49:21.717231 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:49:21.717231 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:49:21.717231 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 21:49:22.049539 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 13 21:49:23.645473 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:49:23.645473 ignition[901]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 13 21:49:23.650261 ignition[901]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 21:49:23.653820 ignition[901]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 21:49:23.653820 ignition[901]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 13 21:49:23.653820 ignition[901]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 13 21:49:23.653820 ignition[901]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:49:23.653820 ignition[901]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:49:23.653820 ignition[901]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 13 21:49:23.653820 ignition[901]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:49:23.653820 ignition[901]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:49:23.653820 ignition[901]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:49:23.653820 ignition[901]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:49:23.653820 ignition[901]: INFO : files: files passed Jan 13 21:49:23.653820 ignition[901]: INFO : 
Ignition finished successfully Jan 13 21:49:23.653168 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:49:23.663943 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:49:23.667293 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:49:23.674269 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:49:23.674383 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:49:23.691811 initrd-setup-root-after-ignition[929]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:49:23.693620 initrd-setup-root-after-ignition[929]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:49:23.694497 initrd-setup-root-after-ignition[933]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:49:23.697002 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:49:23.698983 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:49:23.704265 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:49:23.747042 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:49:23.747379 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:49:23.749749 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:49:23.751515 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:49:23.753724 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:49:23.760369 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:49:23.779462 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:49:23.788414 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:49:23.805595 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:49:23.806391 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:49:23.808862 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:49:23.811055 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:49:23.811226 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:49:23.813756 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:49:23.814907 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:49:23.817165 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:49:23.819147 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:49:23.820990 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:49:23.823339 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:49:23.825520 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:49:23.827827 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:49:23.830126 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:49:23.832369 systemd[1]: Stopped target swap.target - Swaps. 
Jan 13 21:49:23.834478 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:49:23.834616 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:49:23.837152 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:49:23.838401 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:49:23.839483 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:49:23.839861 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:49:23.840859 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:49:23.840973 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:49:23.842539 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:49:23.842665 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:49:23.843471 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:49:23.843623 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:49:23.851712 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:49:23.852325 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:49:23.852508 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:49:23.856356 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:49:23.856963 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:49:23.857219 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:49:23.858023 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:49:23.858298 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:49:23.864004 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:49:23.864752 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:49:23.873682 ignition[954]: INFO : Ignition 2.19.0 Jan 13 21:49:23.873682 ignition[954]: INFO : Stage: umount Jan 13 21:49:23.873682 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:49:23.873682 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:49:23.877728 ignition[954]: INFO : umount: umount passed Jan 13 21:49:23.877728 ignition[954]: INFO : Ignition finished successfully Jan 13 21:49:23.877054 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:49:23.877204 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:49:23.878903 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:49:23.878988 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:49:23.880006 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:49:23.880048 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:49:23.881065 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:49:23.881131 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:49:23.882212 systemd[1]: Stopped target network.target - Network. Jan 13 21:49:23.883225 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:49:23.883273 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 13 21:49:23.884391 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:49:23.885391 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:49:23.889179 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:49:23.890021 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:49:23.891227 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:49:23.892544 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:49:23.892585 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:49:23.893620 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:49:23.893656 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:49:23.894841 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:49:23.894887 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:49:23.898054 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:49:23.898118 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:49:23.899259 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:49:23.900311 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:49:23.903148 systemd-networkd[709]: eth0: DHCPv6 lease lost Jan 13 21:49:23.904800 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:49:23.904903 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:49:23.909797 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:49:23.909834 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:49:23.919528 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:49:23.920064 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:49:23.920142 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:49:23.920854 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:49:23.922202 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:49:23.922292 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:49:23.937404 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:49:23.937495 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:49:23.938924 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:49:23.938989 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:49:23.940188 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:49:23.940235 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:49:23.943956 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:49:23.944605 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:49:23.944785 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:49:23.947883 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:49:23.948937 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:49:23.950621 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 13 21:49:23.950735 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:49:23.952698 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:49:23.952752 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:49:23.954001 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:49:23.954036 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:49:23.955006 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:49:23.955052 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:49:23.956605 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:49:23.956648 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:49:23.957898 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:49:23.957958 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:49:23.959237 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:49:23.959279 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:49:23.968446 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:49:23.970439 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:49:23.970517 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:49:23.971203 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:49:23.971254 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:49:23.977298 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:49:23.977423 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:49:23.979361 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:49:23.987285 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:49:23.996828 systemd[1]: Switching root. Jan 13 21:49:24.028524 systemd-journald[184]: Journal stopped Jan 13 21:49:25.999564 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 13 21:49:25.999668 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:49:25.999686 kernel: SELinux: policy capability open_perms=1 Jan 13 21:49:25.999698 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:49:25.999710 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:49:25.999722 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:49:25.999734 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:49:25.999750 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:49:25.999762 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:49:25.999780 systemd[1]: Successfully loaded SELinux policy in 74.081ms. Jan 13 21:49:25.999805 kernel: audit: type=1403 audit(1736804964.945:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:49:25.999818 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.954ms. 
Jan 13 21:49:25.999833 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:49:25.999847 systemd[1]: Detected virtualization kvm. Jan 13 21:49:25.999860 systemd[1]: Detected architecture x86-64. Jan 13 21:49:25.999878 systemd[1]: Detected first boot. Jan 13 21:49:25.999890 systemd[1]: Hostname set to . Jan 13 21:49:25.999903 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:49:25.999915 zram_generator::config[1013]: No configuration found. Jan 13 21:49:25.999929 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:49:25.999942 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:49:25.999955 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:49:25.999969 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:49:25.999982 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:49:25.999997 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:49:26.000010 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:49:26.000023 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:49:26.000036 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:49:26.000049 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:49:26.000062 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:49:26.000074 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:49:26.000087 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:49:26.000116 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:49:26.000133 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:49:26.000146 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:49:26.000159 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:49:26.000179 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:49:26.000192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:49:26.000205 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:49:26.000217 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:49:26.000233 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:49:26.000245 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:49:26.000258 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:49:26.000274 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:49:26.000286 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:49:26.000299 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 13 21:49:26.000312 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:49:26.000324 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:49:26.000339 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:49:26.000351 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:49:26.000364 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:49:26.000377 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:49:26.000394 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:49:26.000407 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:49:26.000420 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:49:26.000432 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:49:26.000445 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:49:26.000462 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:49:26.000476 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:49:26.000489 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:49:26.000502 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:49:26.000515 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:49:26.000528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:49:26.000541 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:49:26.000554 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:49:26.000566 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:49:26.000582 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:49:26.000595 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:49:26.000608 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 13 21:49:26.000627 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 13 21:49:26.000640 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:49:26.000652 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:49:26.000665 kernel: ACPI: bus type drm_connector registered Jan 13 21:49:26.000677 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:49:26.000691 kernel: fuse: init (API version 7.39) Jan 13 21:49:26.000704 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:49:26.000716 kernel: loop: module loaded Jan 13 21:49:26.000763 systemd-journald[1132]: Collecting audit messages is disabled. Jan 13 21:49:26.000792 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 13 21:49:26.000807 systemd-journald[1132]: Journal started Jan 13 21:49:26.000836 systemd-journald[1132]: Runtime Journal (/run/log/journal/5e429e489159447e8bd042c4e1ec3d93) is 8.0M, max 78.3M, 70.3M free. Jan 13 21:49:26.009069 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:49:26.012804 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:49:26.013716 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:49:26.014433 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:49:26.015046 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:49:26.015639 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:49:26.016237 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:49:26.016815 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:49:26.017617 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:49:26.018466 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:49:26.019318 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:49:26.019484 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:49:26.020290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:49:26.020433 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:49:26.021378 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:49:26.021523 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:49:26.022420 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:49:26.022559 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:49:26.023324 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:49:26.023471 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:49:26.024275 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:49:26.026270 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:49:26.027137 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:49:26.029890 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:49:26.031731 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:49:26.042590 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:49:26.047281 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:49:26.053207 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:49:26.055180 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:49:26.064257 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:49:26.072277 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:49:26.073194 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 13 21:49:26.079704 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:49:26.080357 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:49:26.086822 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:49:26.096262 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:49:26.103203 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:49:26.109272 systemd-journald[1132]: Time spent on flushing to /var/log/journal/5e429e489159447e8bd042c4e1ec3d93 is 28.810ms for 937 entries. Jan 13 21:49:26.109272 systemd-journald[1132]: System Journal (/var/log/journal/5e429e489159447e8bd042c4e1ec3d93) is 8.0M, max 584.8M, 576.8M free. Jan 13 21:49:26.152541 systemd-journald[1132]: Received client request to flush runtime journal. Jan 13 21:49:26.111290 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:49:26.114362 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:49:26.115408 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:49:26.121918 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:49:26.133341 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:49:26.141514 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:49:26.154764 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:49:26.161465 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 21:49:26.165579 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jan 13 21:49:26.165599 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jan 13 21:49:26.170574 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:49:26.177457 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:49:26.211679 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:49:26.221851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:49:26.236408 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 13 21:49:26.236431 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 13 21:49:26.240629 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:49:26.843899 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:49:26.853415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:49:26.877035 systemd-udevd[1198]: Using default interface naming scheme 'v255'. Jan 13 21:49:26.911454 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:49:26.927505 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:49:26.977604 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:49:26.982623 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
Jan 13 21:49:27.042601 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1204) Jan 13 21:49:27.077198 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 21:49:27.084242 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:49:27.090871 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:49:27.118170 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 13 21:49:27.153360 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 21:49:27.183122 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:49:27.191054 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:49:27.201150 systemd-networkd[1210]: lo: Link UP Jan 13 21:49:27.201160 systemd-networkd[1210]: lo: Gained carrier Jan 13 21:49:27.202464 systemd-networkd[1210]: Enumeration completed Jan 13 21:49:27.203382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:49:27.204204 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:49:27.207271 systemd-networkd[1210]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:49:27.207281 systemd-networkd[1210]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:49:27.210362 systemd-networkd[1210]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:49:27.210400 systemd-networkd[1210]: eth0: Link UP Jan 13 21:49:27.210406 systemd-networkd[1210]: eth0: Gained carrier Jan 13 21:49:27.210417 systemd-networkd[1210]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:49:27.215544 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:49:27.229169 systemd-networkd[1210]: eth0: DHCPv4 address 172.24.4.62/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 13 21:49:27.232553 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 13 21:49:27.235147 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 13 21:49:27.239142 kernel: Console: switching to colour dummy device 80x25 Jan 13 21:49:27.241287 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 13 21:49:27.241341 kernel: [drm] features: -context_init Jan 13 21:49:27.243663 kernel: [drm] number of scanouts: 1 Jan 13 21:49:27.243725 kernel: [drm] number of cap sets: 0 Jan 13 21:49:27.246859 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 13 21:49:27.246375 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:49:27.246759 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:49:27.256245 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 13 21:49:27.256321 kernel: Console: switching to colour frame buffer device 160x50 Jan 13 21:49:27.263745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:49:27.271834 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 13 21:49:27.277809 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:49:27.278225 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 13 21:49:27.285417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:49:27.291598 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:49:27.302733 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:49:27.321763 lvm[1247]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:49:27.348258 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:49:27.348520 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:49:27.355235 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:49:27.361010 lvm[1252]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:49:27.367457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:49:27.391780 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:49:27.393544 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:49:27.395732 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:49:27.395777 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:49:27.395916 systemd[1]: Reached target machines.target - Containers. Jan 13 21:49:27.397673 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:49:27.404298 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:49:27.406560 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:49:27.409976 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:49:27.422968 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:49:27.439863 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:49:27.456686 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:49:27.462529 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:49:27.483194 kernel: loop0: detected capacity change from 0 to 140768 Jan 13 21:49:27.483070 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:49:27.509546 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:49:27.511760 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 13 21:49:27.541150 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:49:27.567125 kernel: loop1: detected capacity change from 0 to 211296 Jan 13 21:49:27.617624 kernel: loop2: detected capacity change from 0 to 8 Jan 13 21:49:27.642172 kernel: loop3: detected capacity change from 0 to 142488 Jan 13 21:49:27.719408 kernel: loop4: detected capacity change from 0 to 140768 Jan 13 21:49:27.775078 kernel: loop5: detected capacity change from 0 to 211296 Jan 13 21:49:27.838077 kernel: loop6: detected capacity change from 0 to 8 Jan 13 21:49:27.844930 kernel: loop7: detected capacity change from 0 to 142488 Jan 13 21:49:27.887047 (sd-merge)[1276]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 13 21:49:27.888366 (sd-merge)[1276]: Merged extensions into '/usr'. Jan 13 21:49:27.896777 systemd[1]: Reloading requested from client PID 1263 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:49:27.896957 systemd[1]: Reloading... Jan 13 21:49:28.002132 zram_generator::config[1301]: No configuration found. Jan 13 21:49:28.202895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:49:28.278249 systemd[1]: Reloading finished in 378 ms. Jan 13 21:49:28.300175 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:49:28.311549 systemd[1]: Starting ensure-sysext.service... Jan 13 21:49:28.324385 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:49:28.335338 systemd[1]: Reloading requested from client PID 1365 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:49:28.335464 systemd[1]: Reloading... Jan 13 21:49:28.381762 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:49:28.382195 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:49:28.383181 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:49:28.383535 systemd-tmpfiles[1366]: ACLs are not supported, ignoring. Jan 13 21:49:28.383621 systemd-tmpfiles[1366]: ACLs are not supported, ignoring. Jan 13 21:49:28.387236 systemd-tmpfiles[1366]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:49:28.387242 systemd-tmpfiles[1366]: Skipping /boot Jan 13 21:49:28.397725 systemd-tmpfiles[1366]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:49:28.397743 systemd-tmpfiles[1366]: Skipping /boot Jan 13 21:49:28.433256 ldconfig[1259]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:49:28.457370 zram_generator::config[1396]: No configuration found. Jan 13 21:49:28.459234 systemd-networkd[1210]: eth0: Gained IPv6LL Jan 13 21:49:28.620883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:49:28.687640 systemd[1]: Reloading finished in 351 ms. Jan 13 21:49:28.706882 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 13 21:49:28.709733 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:49:28.725765 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:49:28.740354 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:49:28.752377 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:49:28.769410 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:49:28.786451 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:49:28.799298 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:49:28.808327 augenrules[1484]: No rules Jan 13 21:49:28.811609 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:49:28.822386 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:49:28.822628 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:49:28.824165 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:49:28.832228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:49:28.851336 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:49:28.856588 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:49:28.856797 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:49:28.864180 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:49:28.873027 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:49:28.873363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:49:28.882477 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:49:28.882699 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:49:28.893448 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:49:28.894290 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:49:28.913176 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:49:28.921039 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:49:28.921762 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:49:28.928415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:49:28.939373 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:49:28.947287 systemd-resolved[1478]: Positive Trust Anchors: Jan 13 21:49:28.947298 systemd-resolved[1478]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:49:28.947343 systemd-resolved[1478]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:49:28.949352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:49:28.961256 systemd-resolved[1478]: Using system hostname 'ci-4081-3-0-d-9566454817.novalocal'. Jan 13 21:49:28.966272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:49:28.969197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:49:28.974674 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:49:28.977313 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:49:28.977939 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:49:28.996721 systemd[1]: Finished ensure-sysext.service. Jan 13 21:49:28.997964 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:49:29.001529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:49:29.001729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:49:29.005746 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:49:29.006022 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:49:29.006987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:49:29.008222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:49:29.010562 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:49:29.013314 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:49:29.016944 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:49:29.024751 systemd[1]: Reached target network.target - Network. Jan 13 21:49:29.027707 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:49:29.029985 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:49:29.032068 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:49:29.032169 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:49:29.038285 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:49:29.041787 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:49:29.103490 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jan 13 21:49:29.107077 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:49:29.108895 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:49:29.111146 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:49:29.113384 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:49:29.115671 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:49:29.115706 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:49:29.117989 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:49:29.120054 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:49:29.121419 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:49:29.122481 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:49:29.124796 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:49:29.127931 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:49:29.131651 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:49:29.140064 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:49:29.142473 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:49:29.144947 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:49:29.147500 systemd[1]: System is tainted: cgroupsv1 Jan 13 21:49:29.147617 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:49:29.147715 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:49:29.150471 systemd-timesyncd[1526]: Contacted time server 82.66.40.79:123 (0.flatcar.pool.ntp.org). Jan 13 21:49:29.150553 systemd-timesyncd[1526]: Initial clock synchronization to Mon 2025-01-13 21:49:29.491939 UTC. Jan 13 21:49:29.158262 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:49:29.165264 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:49:29.175354 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:49:29.181253 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:49:29.193339 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:49:29.199629 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:49:29.206785 jq[1534]: false Jan 13 21:49:29.213161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:49:29.225356 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:49:29.232783 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:49:29.245208 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:49:29.252914 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 13 21:49:29.261416 dbus-daemon[1533]: [system] SELinux support is enabled Jan 13 21:49:29.268331 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:49:29.270490 extend-filesystems[1537]: Found loop4 Jan 13 21:49:29.270490 extend-filesystems[1537]: Found loop5 Jan 13 21:49:29.270490 extend-filesystems[1537]: Found loop6 Jan 13 21:49:29.270490 extend-filesystems[1537]: Found loop7 Jan 13 21:49:29.270490 extend-filesystems[1537]: Found vda Jan 13 21:49:29.270490 extend-filesystems[1537]: Found vda1 Jan 13 21:49:29.270490 extend-filesystems[1537]: Found vda2 Jan 13 21:49:29.270490 extend-filesystems[1537]: Found vda3 Jan 13 21:49:29.270490 extend-filesystems[1537]: Found usr Jan 13 21:49:29.270490 extend-filesystems[1537]: Found vda4 Jan 13 21:49:29.270490 extend-filesystems[1537]: Found vda6 Jan 13 21:49:29.270490 extend-filesystems[1537]: Found vda7 Jan 13 21:49:29.270490 extend-filesystems[1537]: Found vda9 Jan 13 21:49:29.270490 extend-filesystems[1537]: Checking size of /dev/vda9 Jan 13 21:49:29.290357 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:49:29.304540 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:49:29.316238 extend-filesystems[1537]: Resized partition /dev/vda9 Jan 13 21:49:29.316316 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:49:29.331148 extend-filesystems[1569]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:49:29.330237 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:49:29.339036 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:49:29.352417 update_engine[1567]: I20250113 21:49:29.352325 1567 main.cc:92] Flatcar Update Engine starting Jan 13 21:49:29.358995 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 13 21:49:29.359085 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:49:29.359471 update_engine[1567]: I20250113 21:49:29.359207 1567 update_check_scheduler.cc:74] Next update check in 8m55s Jan 13 21:49:29.359480 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:49:29.370793 jq[1571]: true Jan 13 21:49:29.375606 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:49:29.375872 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:49:29.384494 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:49:29.398485 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 13 21:49:29.464619 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1199) Jan 13 21:49:29.405726 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:49:29.405996 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:49:29.464947 jq[1579]: true Jan 13 21:49:29.445160 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:49:29.478068 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:49:29.478068 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:49:29.478068 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. 
Jan 13 21:49:29.502940 extend-filesystems[1537]: Resized filesystem in /dev/vda9 Jan 13 21:49:29.496647 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:49:29.496904 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:49:29.515501 tar[1577]: linux-amd64/helm Jan 13 21:49:29.521621 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:49:29.526775 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:49:29.528619 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:49:29.528649 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:49:29.529156 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:49:29.529176 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:49:29.534913 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:49:29.542285 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:49:29.661284 bash[1611]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:49:29.665741 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:49:29.680295 systemd[1]: Starting sshkeys.service... Jan 13 21:49:29.693479 systemd-logind[1560]: New seat seat0. Jan 13 21:49:29.700286 systemd-logind[1560]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:49:29.700309 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:49:29.700645 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:49:29.731351 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:49:29.731675 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:49:29.746060 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:49:29.780388 locksmithd[1597]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:49:29.804749 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:49:29.817826 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:49:29.829430 systemd[1]: Started sshd@0-172.24.4.62:22-172.24.4.1:55048.service - OpenSSH per-connection server daemon (172.24.4.1:55048). Jan 13 21:49:29.857124 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:49:29.857457 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:49:29.867741 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:49:29.913818 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:49:29.931512 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:49:29.948897 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:49:29.954168 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 13 21:49:30.036747 containerd[1580]: time="2025-01-13T21:49:30.036599738Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:49:30.082684 containerd[1580]: time="2025-01-13T21:49:30.082338483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:49:30.090653 containerd[1580]: time="2025-01-13T21:49:30.090605439Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:49:30.090834 containerd[1580]: time="2025-01-13T21:49:30.090767613Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:49:30.091161 containerd[1580]: time="2025-01-13T21:49:30.090893558Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:49:30.091230 containerd[1580]: time="2025-01-13T21:49:30.091073888Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:49:30.091296 containerd[1580]: time="2025-01-13T21:49:30.091277869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:49:30.091485 containerd[1580]: time="2025-01-13T21:49:30.091464299Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:49:30.091614 containerd[1580]: time="2025-01-13T21:49:30.091538857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:49:30.091997 containerd[1580]: time="2025-01-13T21:49:30.091876670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:49:30.091997 containerd[1580]: time="2025-01-13T21:49:30.091898963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:49:30.091997 containerd[1580]: time="2025-01-13T21:49:30.091917245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:49:30.091997 containerd[1580]: time="2025-01-13T21:49:30.091930575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:49:30.092346 containerd[1580]: time="2025-01-13T21:49:30.092216593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:49:30.092703 containerd[1580]: time="2025-01-13T21:49:30.092566357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:49:30.093182 containerd[1580]: time="2025-01-13T21:49:30.092827158Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:49:30.093182 containerd[1580]: time="2025-01-13T21:49:30.092849441Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:49:30.093182 containerd[1580]: time="2025-01-13T21:49:30.092942290Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:49:30.093182 containerd[1580]: time="2025-01-13T21:49:30.093008449Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:49:30.103179 containerd[1580]: time="2025-01-13T21:49:30.103153439Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:49:30.103353 containerd[1580]: time="2025-01-13T21:49:30.103334208Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:49:30.103421 containerd[1580]: time="2025-01-13T21:49:30.103406896Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:49:30.103549 containerd[1580]: time="2025-01-13T21:49:30.103532423Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:49:30.103617 containerd[1580]: time="2025-01-13T21:49:30.103603533Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:49:30.103805 containerd[1580]: time="2025-01-13T21:49:30.103786641Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:49:30.105388 containerd[1580]: time="2025-01-13T21:49:30.105014153Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:49:30.105388 containerd[1580]: time="2025-01-13T21:49:30.105343087Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:49:30.105568 containerd[1580]: time="2025-01-13T21:49:30.105482737Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:49:30.105568 containerd[1580]: time="2025-01-13T21:49:30.105513847Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:49:30.105651 containerd[1580]: time="2025-01-13T21:49:30.105531471Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:49:30.105736 containerd[1580]: time="2025-01-13T21:49:30.105700550Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:49:30.105907 containerd[1580]: time="2025-01-13T21:49:30.105790579Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:49:30.106003 containerd[1580]: time="2025-01-13T21:49:30.105967409Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:49:30.106164 containerd[1580]: time="2025-01-13T21:49:30.106085393Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 13 21:49:30.106164 containerd[1580]: time="2025-01-13T21:49:30.106128997Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:49:30.106329 containerd[1580]: time="2025-01-13T21:49:30.106146862Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:49:30.106329 containerd[1580]: time="2025-01-13T21:49:30.106274749Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:49:30.106479 containerd[1580]: time="2025-01-13T21:49:30.106309672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.106479 containerd[1580]: time="2025-01-13T21:49:30.106417618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.106479 containerd[1580]: time="2025-01-13T21:49:30.106438144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.106479 containerd[1580]: time="2025-01-13T21:49:30.106456207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.106681 containerd[1580]: time="2025-01-13T21:49:30.106616250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.106681 containerd[1580]: time="2025-01-13T21:49:30.106648352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.106836 containerd[1580]: time="2025-01-13T21:49:30.106664774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.106836 containerd[1580]: time="2025-01-13T21:49:30.106778413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.106836 containerd[1580]: time="2025-01-13T21:49:30.106797247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.107008 containerd[1580]: time="2025-01-13T21:49:30.106820502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.107008 containerd[1580]: time="2025-01-13T21:49:30.106951033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.107008 containerd[1580]: time="2025-01-13T21:49:30.106966703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.107008 containerd[1580]: time="2025-01-13T21:49:30.106983731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.107291 containerd[1580]: time="2025-01-13T21:49:30.107149748Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:49:30.107291 containerd[1580]: time="2025-01-13T21:49:30.107189269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.107291 containerd[1580]: time="2025-01-13T21:49:30.107223210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 13 21:49:30.107291 containerd[1580]: time="2025-01-13T21:49:30.107239036Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:49:30.107542 containerd[1580]: time="2025-01-13T21:49:30.107432727Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:49:30.107542 containerd[1580]: time="2025-01-13T21:49:30.107459857Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:49:30.107639 containerd[1580]: time="2025-01-13T21:49:30.107621875Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:49:30.107789 containerd[1580]: time="2025-01-13T21:49:30.107715299Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:49:30.107789 containerd[1580]: time="2025-01-13T21:49:30.107734551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.107789 containerd[1580]: time="2025-01-13T21:49:30.107749177Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:49:30.107789 containerd[1580]: time="2025-01-13T21:49:30.107765850Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:49:30.107983 containerd[1580]: time="2025-01-13T21:49:30.107905688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 21:49:30.108785 containerd[1580]: time="2025-01-13T21:49:30.108575350Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:49:30.108785 containerd[1580]: time="2025-01-13T21:49:30.108696927Z" level=info msg="Connect containerd service" Jan 13 21:49:30.108785 containerd[1580]: time="2025-01-13T21:49:30.108732978Z" level=info msg="using legacy CRI server" Jan 13 21:49:30.108785 containerd[1580]: time="2025-01-13T21:49:30.108741096Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:49:30.109389 containerd[1580]: time="2025-01-13T21:49:30.109194114Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:49:30.110294 containerd[1580]: time="2025-01-13T21:49:30.110184811Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:49:30.110717 containerd[1580]: time="2025-01-13T21:49:30.110516679Z" level=info msg="Start subscribing containerd event" Jan 13 21:49:30.110876 containerd[1580]: time="2025-01-13T21:49:30.110843000Z" level=info msg="Start recovering state" Jan 13 21:49:30.111483 containerd[1580]: time="2025-01-13T21:49:30.111253042Z" level=info msg="Start event monitor" Jan 13 21:49:30.111483 containerd[1580]: time="2025-01-13T21:49:30.111282616Z" level=info msg="Start snapshots syncer" Jan 13 21:49:30.111483 containerd[1580]: time="2025-01-13T21:49:30.111293763Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:49:30.111483 containerd[1580]: time="2025-01-13T21:49:30.111303656Z" level=info msg="Start streaming server" Jan 13 21:49:30.112244 containerd[1580]: time="2025-01-13T21:49:30.112223754Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:49:30.113250 containerd[1580]: time="2025-01-13T21:49:30.113231405Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:49:30.113991 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:49:30.114403 containerd[1580]: time="2025-01-13T21:49:30.114032516Z" level=info msg="containerd successfully booted in 0.078737s" Jan 13 21:49:30.368689 tar[1577]: linux-amd64/LICENSE Jan 13 21:49:30.368689 tar[1577]: linux-amd64/README.md Jan 13 21:49:30.381283 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:49:31.101744 sshd[1637]: Accepted publickey for core from 172.24.4.1 port 55048 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:49:31.103776 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:49:31.118580 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 13 21:49:31.118943 systemd-logind[1560]: New session 1 of user core. Jan 13 21:49:31.133573 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:49:31.154340 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:49:31.164585 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:49:31.176673 (systemd)[1664]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:49:31.278454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:49:31.292527 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:49:31.322087 systemd[1664]: Queued start job for default target default.target. Jan 13 21:49:31.322696 systemd[1664]: Created slice app.slice - User Application Slice. Jan 13 21:49:31.322718 systemd[1664]: Reached target paths.target - Paths. Jan 13 21:49:31.322733 systemd[1664]: Reached target timers.target - Timers. Jan 13 21:49:31.334266 systemd[1664]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:49:31.340696 systemd[1664]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:49:31.341364 systemd[1664]: Reached target sockets.target - Sockets. Jan 13 21:49:31.341382 systemd[1664]: Reached target basic.target - Basic System. Jan 13 21:49:31.341427 systemd[1664]: Reached target default.target - Main User Target. Jan 13 21:49:31.341453 systemd[1664]: Startup finished in 158ms. Jan 13 21:49:31.343078 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:49:31.352555 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:49:31.788885 systemd[1]: Started sshd@1-172.24.4.62:22-172.24.4.1:55052.service - OpenSSH per-connection server daemon (172.24.4.1:55052). Jan 13 21:49:32.652988 kubelet[1677]: E0113 21:49:32.652901 1677 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:49:32.658789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:49:32.658973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:49:33.682498 sshd[1689]: Accepted publickey for core from 172.24.4.1 port 55052 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:49:33.685753 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:49:33.697681 systemd-logind[1560]: New session 2 of user core. Jan 13 21:49:33.710201 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:49:34.318499 sshd[1689]: pam_unix(sshd:session): session closed for user core Jan 13 21:49:34.332005 systemd[1]: Started sshd@2-172.24.4.62:22-172.24.4.1:36032.service - OpenSSH per-connection server daemon (172.24.4.1:36032). Jan 13 21:49:34.348276 systemd[1]: sshd@1-172.24.4.62:22-172.24.4.1:55052.service: Deactivated successfully. Jan 13 21:49:34.354510 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:49:34.357413 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:49:34.361841 systemd-logind[1560]: Removed session 2. 
Jan 13 21:49:34.996502 login[1646]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 21:49:35.011227 systemd-logind[1560]: New session 3 of user core. Jan 13 21:49:35.016427 login[1647]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 21:49:35.019579 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:49:35.036728 systemd-logind[1560]: New session 4 of user core. Jan 13 21:49:35.047989 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:49:35.779112 sshd[1700]: Accepted publickey for core from 172.24.4.1 port 36032 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:49:35.782205 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:49:35.792566 systemd-logind[1560]: New session 5 of user core. Jan 13 21:49:35.801962 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:49:36.283624 coreos-metadata[1532]: Jan 13 21:49:36.283 WARN failed to locate config-drive, using the metadata service API instead Jan 13 21:49:36.332691 coreos-metadata[1532]: Jan 13 21:49:36.332 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 13 21:49:36.429710 sshd[1700]: pam_unix(sshd:session): session closed for user core Jan 13 21:49:36.436329 systemd[1]: sshd@2-172.24.4.62:22-172.24.4.1:36032.service: Deactivated successfully. Jan 13 21:49:36.443563 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:49:36.444602 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:49:36.447290 systemd-logind[1560]: Removed session 5. Jan 13 21:49:36.522567 coreos-metadata[1532]: Jan 13 21:49:36.522 INFO Fetch successful Jan 13 21:49:36.522567 coreos-metadata[1532]: Jan 13 21:49:36.522 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 21:49:36.540066 coreos-metadata[1532]: Jan 13 21:49:36.539 INFO Fetch successful Jan 13 21:49:36.540066 coreos-metadata[1532]: Jan 13 21:49:36.539 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 13 21:49:36.553563 coreos-metadata[1532]: Jan 13 21:49:36.553 INFO Fetch successful Jan 13 21:49:36.553563 coreos-metadata[1532]: Jan 13 21:49:36.553 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 13 21:49:36.567255 coreos-metadata[1532]: Jan 13 21:49:36.567 INFO Fetch successful Jan 13 21:49:36.567255 coreos-metadata[1532]: Jan 13 21:49:36.567 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 13 21:49:36.580727 coreos-metadata[1532]: Jan 13 21:49:36.580 INFO Fetch successful Jan 13 21:49:36.580942 coreos-metadata[1532]: Jan 13 21:49:36.580 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 13 21:49:36.594435 coreos-metadata[1532]: Jan 13 21:49:36.594 INFO Fetch successful Jan 13 21:49:36.645681 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:49:36.649070 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 13 21:49:36.871372 coreos-metadata[1619]: Jan 13 21:49:36.870 WARN failed to locate config-drive, using the metadata service API instead Jan 13 21:49:36.916053 coreos-metadata[1619]: Jan 13 21:49:36.915 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 13 21:49:36.932488 coreos-metadata[1619]: Jan 13 21:49:36.932 INFO Fetch successful Jan 13 21:49:36.932488 coreos-metadata[1619]: Jan 13 21:49:36.932 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 21:49:36.946070 coreos-metadata[1619]: Jan 13 21:49:36.945 INFO Fetch successful Jan 13 21:49:36.951376 unknown[1619]: wrote ssh authorized keys file for user: core Jan 13 21:49:37.001051 update-ssh-keys[1750]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:49:37.002583 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:49:37.018309 systemd[1]: Finished sshkeys.service. Jan 13 21:49:37.021950 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:49:37.023520 systemd[1]: Startup finished in 17.330s (kernel) + 12.152s (userspace) = 29.482s. Jan 13 21:49:42.694882 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:49:42.706568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:49:43.036981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:49:43.041210 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:49:43.263403 kubelet[1769]: E0113 21:49:43.263260 1769 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:49:43.272787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:49:43.274079 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:49:46.550777 systemd[1]: Started sshd@3-172.24.4.62:22-172.24.4.1:34048.service - OpenSSH per-connection server daemon (172.24.4.1:34048). Jan 13 21:49:47.720189 sshd[1778]: Accepted publickey for core from 172.24.4.1 port 34048 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:49:47.723269 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:49:47.735254 systemd-logind[1560]: New session 6 of user core. Jan 13 21:49:47.742675 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:49:48.339503 sshd[1778]: pam_unix(sshd:session): session closed for user core Jan 13 21:49:48.349770 systemd[1]: Started sshd@4-172.24.4.62:22-172.24.4.1:34064.service - OpenSSH per-connection server daemon (172.24.4.1:34064). Jan 13 21:49:48.350918 systemd[1]: sshd@3-172.24.4.62:22-172.24.4.1:34048.service: Deactivated successfully. Jan 13 21:49:48.367872 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:49:48.369838 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:49:48.373693 systemd-logind[1560]: Removed session 6. 
Jan 13 21:49:49.732944 sshd[1783]: Accepted publickey for core from 172.24.4.1 port 34064 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:49:49.736075 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:49:49.747790 systemd-logind[1560]: New session 7 of user core. Jan 13 21:49:49.757651 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:49:50.339495 sshd[1783]: pam_unix(sshd:session): session closed for user core Jan 13 21:49:50.351692 systemd[1]: Started sshd@5-172.24.4.62:22-172.24.4.1:34080.service - OpenSSH per-connection server daemon (172.24.4.1:34080). Jan 13 21:49:50.352733 systemd[1]: sshd@4-172.24.4.62:22-172.24.4.1:34064.service: Deactivated successfully. Jan 13 21:49:50.363028 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:49:50.366895 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:49:50.370524 systemd-logind[1560]: Removed session 7. Jan 13 21:49:51.594803 sshd[1791]: Accepted publickey for core from 172.24.4.1 port 34080 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:49:51.596781 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:49:51.601376 systemd-logind[1560]: New session 8 of user core. Jan 13 21:49:51.609499 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:49:52.375873 sshd[1791]: pam_unix(sshd:session): session closed for user core Jan 13 21:49:52.386647 systemd[1]: Started sshd@6-172.24.4.62:22-172.24.4.1:34082.service - OpenSSH per-connection server daemon (172.24.4.1:34082). Jan 13 21:49:52.387606 systemd[1]: sshd@5-172.24.4.62:22-172.24.4.1:34080.service: Deactivated successfully. Jan 13 21:49:52.401428 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:49:52.405759 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:49:52.408311 systemd-logind[1560]: Removed session 8. Jan 13 21:49:53.444797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:49:53.454470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:49:53.614308 sshd[1799]: Accepted publickey for core from 172.24.4.1 port 34082 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:49:53.618325 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:49:53.628886 systemd-logind[1560]: New session 9 of user core. Jan 13 21:49:53.639623 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:49:53.782365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:49:53.783780 (kubelet)[1818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:49:53.883377 kubelet[1818]: E0113 21:49:53.883067 1818 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:49:53.887253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:49:53.887540 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:49:54.077439 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:49:54.079058 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:49:54.097580 sudo[1828]: pam_unix(sudo:session): session closed for user root Jan 13 21:49:54.363616 sshd[1799]: pam_unix(sshd:session): session closed for user core Jan 13 21:49:54.384310 systemd[1]: Started sshd@7-172.24.4.62:22-172.24.4.1:43730.service - OpenSSH per-connection server daemon (172.24.4.1:43730). Jan 13 21:49:54.388621 systemd[1]: sshd@6-172.24.4.62:22-172.24.4.1:34082.service: Deactivated successfully. Jan 13 21:49:54.402526 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:49:54.404973 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:49:54.408186 systemd-logind[1560]: Removed session 9. Jan 13 21:49:55.729067 sshd[1830]: Accepted publickey for core from 172.24.4.1 port 43730 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:49:55.732074 sshd[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:49:55.743531 systemd-logind[1560]: New session 10 of user core. Jan 13 21:49:55.750935 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:49:56.064311 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:49:56.065082 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:49:56.073057 sudo[1838]: pam_unix(sudo:session): session closed for user root Jan 13 21:49:56.084547 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:49:56.085266 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:49:56.112868 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:49:56.118576 auditctl[1841]: No rules Jan 13 21:49:56.119404 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:49:56.119904 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:49:56.135671 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:49:56.182869 augenrules[1860]: No rules Jan 13 21:49:56.185058 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:49:56.189300 sudo[1837]: pam_unix(sudo:session): session closed for user root Jan 13 21:49:56.351447 sshd[1830]: pam_unix(sshd:session): session closed for user core Jan 13 21:49:56.362788 systemd[1]: Started sshd@8-172.24.4.62:22-172.24.4.1:43742.service - OpenSSH per-connection server daemon (172.24.4.1:43742). Jan 13 21:49:56.363870 systemd[1]: sshd@7-172.24.4.62:22-172.24.4.1:43730.service: Deactivated successfully. Jan 13 21:49:56.373155 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:49:56.373976 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:49:56.385252 systemd-logind[1560]: Removed session 10. Jan 13 21:49:57.529747 sshd[1866]: Accepted publickey for core from 172.24.4.1 port 43742 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:49:57.532642 sshd[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:49:57.542721 systemd-logind[1560]: New session 11 of user core. 
Jan 13 21:49:57.553010 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:49:58.095631 sudo[1873]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:49:58.096087 sudo[1873]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:49:58.830411 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:49:58.831490 (dockerd)[1889]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:49:59.850376 dockerd[1889]: time="2025-01-13T21:49:59.850262364Z" level=info msg="Starting up" Jan 13 21:50:00.041579 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport99461499-merged.mount: Deactivated successfully. Jan 13 21:50:00.294405 dockerd[1889]: time="2025-01-13T21:50:00.293826252Z" level=info msg="Loading containers: start." Jan 13 21:50:00.479137 kernel: Initializing XFRM netlink socket Jan 13 21:50:00.585993 systemd-networkd[1210]: docker0: Link UP Jan 13 21:50:00.612559 dockerd[1889]: time="2025-01-13T21:50:00.612492455Z" level=info msg="Loading containers: done." Jan 13 21:50:00.650885 dockerd[1889]: time="2025-01-13T21:50:00.650742494Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:50:00.651234 dockerd[1889]: time="2025-01-13T21:50:00.651025354Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:50:00.651366 dockerd[1889]: time="2025-01-13T21:50:00.651308745Z" level=info msg="Daemon has completed initialization" Jan 13 21:50:00.733169 dockerd[1889]: time="2025-01-13T21:50:00.732907601Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:50:00.733543 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:50:01.040331 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck342196160-merged.mount: Deactivated successfully. Jan 13 21:50:02.508945 containerd[1580]: time="2025-01-13T21:50:02.508449043Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 21:50:03.307186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount620421574.mount: Deactivated successfully. Jan 13 21:50:03.943897 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 21:50:03.950184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:50:04.069420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:50:04.070319 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:50:04.132563 kubelet[2092]: E0113 21:50:04.132508 2092 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:50:04.137776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:50:04.137983 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:50:05.452396 containerd[1580]: time="2025-01-13T21:50:05.452120343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:05.453566 containerd[1580]: time="2025-01-13T21:50:05.453513279Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262" Jan 13 21:50:05.454571 containerd[1580]: time="2025-01-13T21:50:05.454503913Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:05.458137 containerd[1580]: time="2025-01-13T21:50:05.458023561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:05.459715 containerd[1580]: time="2025-01-13T21:50:05.459487590Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.950968952s" Jan 13 21:50:05.459715 containerd[1580]: time="2025-01-13T21:50:05.459531844Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 21:50:05.484803 containerd[1580]: time="2025-01-13T21:50:05.484765398Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 21:50:07.822936 containerd[1580]: time="2025-01-13T21:50:07.822817148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:07.824433 containerd[1580]: time="2025-01-13T21:50:07.824216390Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740" Jan 13 21:50:07.825628 containerd[1580]: time="2025-01-13T21:50:07.825580042Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:07.829921 containerd[1580]: time="2025-01-13T21:50:07.829868001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:07.831336 containerd[1580]: time="2025-01-13T21:50:07.830886201Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.345912809s" Jan 13 21:50:07.831336 containerd[1580]: time="2025-01-13T21:50:07.830918592Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
21:50:07.854888 containerd[1580]: time="2025-01-13T21:50:07.854616549Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 21:50:09.397201 containerd[1580]: time="2025-01-13T21:50:09.396412643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:09.398580 containerd[1580]: time="2025-01-13T21:50:09.398352058Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830" Jan 13 21:50:09.399846 containerd[1580]: time="2025-01-13T21:50:09.399786835Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:09.403142 containerd[1580]: time="2025-01-13T21:50:09.403050540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:09.404366 containerd[1580]: time="2025-01-13T21:50:09.404199884Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.549537351s" Jan 13 21:50:09.404366 containerd[1580]: time="2025-01-13T21:50:09.404257114Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 21:50:09.430519 containerd[1580]: time="2025-01-13T21:50:09.430381223Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:50:10.779868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2902568726.mount: Deactivated successfully. 
Jan 13 21:50:11.336618 containerd[1580]: time="2025-01-13T21:50:11.336475726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:11.344238 containerd[1580]: time="2025-01-13T21:50:11.343906854Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Jan 13 21:50:11.353140 containerd[1580]: time="2025-01-13T21:50:11.351373436Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:11.357633 containerd[1580]: time="2025-01-13T21:50:11.357494643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:11.360058 containerd[1580]: time="2025-01-13T21:50:11.359977099Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.92954026s" Jan 13 21:50:11.360228 containerd[1580]: time="2025-01-13T21:50:11.360054562Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 21:50:11.412905 containerd[1580]: time="2025-01-13T21:50:11.412817017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:50:12.038694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4166028944.mount: Deactivated successfully. 
Jan 13 21:50:13.226369 containerd[1580]: time="2025-01-13T21:50:13.226291252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:13.228021 containerd[1580]: time="2025-01-13T21:50:13.227974375Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 13 21:50:13.229722 containerd[1580]: time="2025-01-13T21:50:13.229656508Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:13.233430 containerd[1580]: time="2025-01-13T21:50:13.233371195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:13.235884 containerd[1580]: time="2025-01-13T21:50:13.235826485Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.821974521s" Jan 13 21:50:13.236212 containerd[1580]: time="2025-01-13T21:50:13.236038929Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:50:13.265539 containerd[1580]: time="2025-01-13T21:50:13.265321134Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:50:13.850420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4151339515.mount: Deactivated successfully. 
Jan 13 21:50:13.860044 containerd[1580]: time="2025-01-13T21:50:13.859954227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:13.862937 containerd[1580]: time="2025-01-13T21:50:13.862812554Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 13 21:50:13.863955 containerd[1580]: time="2025-01-13T21:50:13.863811597Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:13.870000 containerd[1580]: time="2025-01-13T21:50:13.869858158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:13.873701 containerd[1580]: time="2025-01-13T21:50:13.873290591Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 607.903152ms" Jan 13 21:50:13.873701 containerd[1580]: time="2025-01-13T21:50:13.873402641Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:50:13.928812 containerd[1580]: time="2025-01-13T21:50:13.928594459Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 21:50:14.194711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 13 21:50:14.202434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:50:14.548016 update_engine[1567]: I20250113 21:50:14.547333 1567 update_attempter.cc:509] Updating boot flags... Jan 13 21:50:14.831153 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2207) Jan 13 21:50:14.883285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:50:14.887980 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:50:14.938700 kubelet[2221]: E0113 21:50:14.938619 2221 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:50:14.942500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:50:14.942682 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:50:14.969142 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2204) Jan 13 21:50:15.291772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1463438258.mount: Deactivated successfully. 
Jan 13 21:50:18.984136 containerd[1580]: time="2025-01-13T21:50:18.984025078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:18.986040 containerd[1580]: time="2025-01-13T21:50:18.985973436Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jan 13 21:50:18.987252 containerd[1580]: time="2025-01-13T21:50:18.987197100Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:18.991258 containerd[1580]: time="2025-01-13T21:50:18.991176571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:50:18.992835 containerd[1580]: time="2025-01-13T21:50:18.992575446Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.063940504s" Jan 13 21:50:18.992835 containerd[1580]: time="2025-01-13T21:50:18.992618832Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 21:50:23.745178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:50:23.754675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:50:23.793077 systemd[1]: Reloading requested from client PID 2343 ('systemctl') (unit session-11.scope)... Jan 13 21:50:23.793111 systemd[1]: Reloading... Jan 13 21:50:23.897142 zram_generator::config[2385]: No configuration found. Jan 13 21:50:24.104012 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:50:24.187937 systemd[1]: Reloading finished in 394 ms. Jan 13 21:50:24.230998 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:50:24.231123 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:50:24.231509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:50:24.245527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:50:24.371933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:50:24.380539 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:50:24.467127 kubelet[2457]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:50:24.467127 kubelet[2457]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 21:50:24.467127 kubelet[2457]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:50:24.467127 kubelet[2457]: I0113 21:50:24.466477 2457 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:50:25.182187 kubelet[2457]: I0113 21:50:25.181886 2457 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:50:25.182187 kubelet[2457]: I0113 21:50:25.181920 2457 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:50:25.182187 kubelet[2457]: I0113 21:50:25.182204 2457 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:50:25.233502 kubelet[2457]: E0113 21:50:25.233421 2457 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:25.235953 kubelet[2457]: I0113 21:50:25.235691 2457 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:50:25.296898 kubelet[2457]: I0113 21:50:25.296846 2457 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:50:25.320159 kubelet[2457]: I0113 21:50:25.319208 2457 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:50:25.320159 kubelet[2457]: I0113 21:50:25.319648 2457 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:50:25.320159 kubelet[2457]: I0113 21:50:25.319719 2457 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:50:25.320159 kubelet[2457]: I0113 21:50:25.319744 2457 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 
21:50:25.320159 kubelet[2457]: I0113 21:50:25.319995 2457 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:50:25.321504 kubelet[2457]: I0113 21:50:25.320831 2457 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:50:25.321678 kubelet[2457]: I0113 21:50:25.321655 2457 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:50:25.321843 kubelet[2457]: I0113 21:50:25.321821 2457 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:50:25.321980 kubelet[2457]: I0113 21:50:25.321960 2457 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:50:25.327351 kubelet[2457]: W0113 21:50:25.321802 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-d-9566454817.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:25.328148 kubelet[2457]: E0113 21:50:25.327538 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-d-9566454817.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:25.332739 kubelet[2457]: W0113 21:50:25.332561 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:25.332901 kubelet[2457]: E0113 21:50:25.332827 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:25.333245 kubelet[2457]: I0113 21:50:25.333180 2457 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:50:25.342326 kubelet[2457]: I0113 21:50:25.342272 2457 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:50:25.342511 kubelet[2457]: W0113 21:50:25.342478 2457 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
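The nodeConfig dump above carries the kubelet's hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, all with Operator "LessThan"). As a rough illustration of how such thresholds are compared against observed node stats, here is a minimal, self-contained Python sketch; the capacities and observed values are invented for the example, and the logic is only a model of the idea, not the kubelet's eviction manager.

```python
# Hard-eviction thresholds copied from the nodeConfig dump in the log above.
HARD_EVICTION = {
    "memory.available":  {"quantity": 100 * 1024 * 1024},  # 100Mi in bytes
    "nodefs.available":  {"percentage": 0.10},
    "nodefs.inodesFree": {"percentage": 0.05},
    "imagefs.available": {"percentage": 0.15},
}

def signals_under_pressure(observed, capacity):
    """Return the signals whose observed value fell below their threshold."""
    breached = []
    for signal, threshold in HARD_EVICTION.items():
        if "quantity" in threshold:
            limit = threshold["quantity"]
        else:
            limit = threshold["percentage"] * capacity[signal]
        if observed[signal] < limit:     # "Operator":"LessThan" in the log
            breached.append(signal)
    return breached

# Illustrative-only numbers: 4Gi RAM, 40Gi node fs, 1M inodes, 60Gi image fs.
capacity = {
    "memory.available":  4 * 1024**3,
    "nodefs.available":  40 * 1024**3,
    "nodefs.inodesFree": 1_000_000,
    "imagefs.available": 60 * 1024**3,
}
observed = {
    "memory.available":  80 * 1024**2,   # below 100Mi -> memory pressure
    "nodefs.available":  20 * 1024**3,
    "nodefs.inodesFree": 900_000,
    "imagefs.available": 50 * 1024**3,
}
print(signals_under_pressure(observed, capacity))  # ['memory.available']
```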
Jan 13 21:50:25.349598 kubelet[2457]: I0113 21:50:25.348649 2457 server.go:1256] "Started kubelet" Jan 13 21:50:25.349598 kubelet[2457]: I0113 21:50:25.349348 2457 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:50:25.349598 kubelet[2457]: I0113 21:50:25.350836 2457 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:50:25.358486 kubelet[2457]: I0113 21:50:25.358419 2457 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:50:25.359147 kubelet[2457]: I0113 21:50:25.359068 2457 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:50:25.359677 kubelet[2457]: I0113 21:50:25.359646 2457 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:50:25.364375 kubelet[2457]: E0113 21:50:25.364341 2457 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.62:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.62:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-d-9566454817.novalocal.181a5ef7f4043c25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-d-9566454817.novalocal,UID:ci-4081-3-0-d-9566454817.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-d-9566454817.novalocal,},FirstTimestamp:2025-01-13 21:50:25.348598821 +0000 UTC m=+0.959532245,LastTimestamp:2025-01-13 21:50:25.348598821 +0000 UTC m=+0.959532245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-d-9566454817.novalocal,}" Jan 13 21:50:25.369373 kubelet[2457]: I0113 21:50:25.368980 2457 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:50:25.373629 kubelet[2457]: E0113 21:50:25.373593 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-d-9566454817.novalocal?timeout=10s\": dial tcp 172.24.4.62:6443: connect: connection refused" interval="200ms" Jan 13 21:50:25.377138 kubelet[2457]: W0113 21:50:25.377015 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:25.377824 kubelet[2457]: I0113 21:50:25.377791 2457 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:50:25.378071 kubelet[2457]: I0113 21:50:25.378046 2457 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:50:25.378304 kubelet[2457]: E0113 21:50:25.378192 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:25.378574 kubelet[2457]: I0113 21:50:25.378524 2457 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:50:25.378832 kubelet[2457]: I0113 21:50:25.378767 2457 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:50:25.381557 kubelet[2457]: E0113 21:50:25.380837 2457 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:50:25.382159 kubelet[2457]: I0113 21:50:25.382022 2457 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:50:25.407763 kubelet[2457]: I0113 21:50:25.407628 2457 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:50:25.408719 kubelet[2457]: I0113 21:50:25.408694 2457 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:50:25.408772 kubelet[2457]: I0113 21:50:25.408726 2457 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:50:25.408772 kubelet[2457]: I0113 21:50:25.408747 2457 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:50:25.408833 kubelet[2457]: E0113 21:50:25.408792 2457 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:50:25.416517 kubelet[2457]: W0113 21:50:25.416317 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:25.416517 kubelet[2457]: E0113 21:50:25.416393 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:25.424804 kubelet[2457]: I0113 21:50:25.424781 2457 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:50:25.424960 kubelet[2457]: I0113 21:50:25.424938 2457 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:50:25.425059 kubelet[2457]: I0113 21:50:25.425049 2457 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:50:25.429695 kubelet[2457]: I0113 21:50:25.429681 2457 policy_none.go:49] "None policy: Start" Jan 13 21:50:25.430670 kubelet[2457]: I0113 21:50:25.430392 2457 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:50:25.430670 kubelet[2457]: I0113 21:50:25.430425 2457 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:50:25.439034 kubelet[2457]: I0113 21:50:25.437683 2457 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:50:25.439034 kubelet[2457]: I0113 21:50:25.437944 2457 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:50:25.441624 kubelet[2457]: E0113 21:50:25.441604 2457 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-d-9566454817.novalocal\" not found" Jan 13 21:50:25.472766 kubelet[2457]: I0113 21:50:25.472699 2457 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.473383 kubelet[2457]: E0113 21:50:25.473336 2457 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.62:6443/api/v1/nodes\": dial tcp 172.24.4.62:6443: connect: connection refused" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 
21:50:25.509061 kubelet[2457]: I0113 21:50:25.508951 2457 topology_manager.go:215] "Topology Admit Handler" podUID="909c75a1601dbf57dbfb6521049cad81" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.512582 kubelet[2457]: I0113 21:50:25.512539 2457 topology_manager.go:215] "Topology Admit Handler" podUID="14b97e9aa51a5643bfd86a69cca8975a" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.518693 kubelet[2457]: I0113 21:50:25.518597 2457 topology_manager.go:215] "Topology Admit Handler" podUID="4aef24faa18832fa047c136297de1d5f" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.575641 kubelet[2457]: E0113 21:50:25.575550 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-d-9566454817.novalocal?timeout=10s\": dial tcp 172.24.4.62:6443: connect: connection refused" interval="400ms" Jan 13 21:50:25.677271 kubelet[2457]: I0113 21:50:25.676995 2457 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.678159 kubelet[2457]: E0113 21:50:25.678010 2457 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.62:6443/api/v1/nodes\": dial tcp 172.24.4.62:6443: connect: connection refused" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.679190 kubelet[2457]: I0113 21:50:25.679054 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/909c75a1601dbf57dbfb6521049cad81-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"909c75a1601dbf57dbfb6521049cad81\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.679320 kubelet[2457]: I0113 21:50:25.679222 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14b97e9aa51a5643bfd86a69cca8975a-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"14b97e9aa51a5643bfd86a69cca8975a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.679320 kubelet[2457]: I0113 21:50:25.679292 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14b97e9aa51a5643bfd86a69cca8975a-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"14b97e9aa51a5643bfd86a69cca8975a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.679458 kubelet[2457]: I0113 21:50:25.679361 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14b97e9aa51a5643bfd86a69cca8975a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"14b97e9aa51a5643bfd86a69cca8975a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.679458 kubelet[2457]: I0113 21:50:25.679421 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/909c75a1601dbf57dbfb6521049cad81-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"909c75a1601dbf57dbfb6521049cad81\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.679582 kubelet[2457]: I0113 21:50:25.679480 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14b97e9aa51a5643bfd86a69cca8975a-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"14b97e9aa51a5643bfd86a69cca8975a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.679582 kubelet[2457]: I0113 21:50:25.679538 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14b97e9aa51a5643bfd86a69cca8975a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"14b97e9aa51a5643bfd86a69cca8975a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.679698 kubelet[2457]: I0113 21:50:25.679598 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4aef24faa18832fa047c136297de1d5f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"4aef24faa18832fa047c136297de1d5f\") " pod="kube-system/kube-scheduler-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.679698 kubelet[2457]: I0113 21:50:25.679658 2457 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/909c75a1601dbf57dbfb6521049cad81-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"909c75a1601dbf57dbfb6521049cad81\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:25.832751 containerd[1580]: time="2025-01-13T21:50:25.832147854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-d-9566454817.novalocal,Uid:909c75a1601dbf57dbfb6521049cad81,Namespace:kube-system,Attempt:0,}" Jan 13 21:50:25.839453 containerd[1580]: time="2025-01-13T21:50:25.838930495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal,Uid:14b97e9aa51a5643bfd86a69cca8975a,Namespace:kube-system,Attempt:0,}" Jan 13 21:50:25.841931 containerd[1580]: time="2025-01-13T21:50:25.841872629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-d-9566454817.novalocal,Uid:4aef24faa18832fa047c136297de1d5f,Namespace:kube-system,Attempt:0,}" Jan 13 21:50:25.976812 kubelet[2457]: E0113 21:50:25.976760 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-d-9566454817.novalocal?timeout=10s\": dial tcp 172.24.4.62:6443: connect: connection refused" interval="800ms" Jan 13 21:50:26.082032 kubelet[2457]: I0113 21:50:26.081902 2457 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:26.082724 kubelet[2457]: E0113 21:50:26.082465 2457 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.62:6443/api/v1/nodes\": dial tcp 172.24.4.62:6443: connect: 
connection refused" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:26.313680 kubelet[2457]: W0113 21:50:26.313526 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:26.313680 kubelet[2457]: E0113 21:50:26.313670 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:26.430833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165362839.mount: Deactivated successfully. Jan 13 21:50:26.472247 containerd[1580]: time="2025-01-13T21:50:26.472038514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:50:26.482135 containerd[1580]: time="2025-01-13T21:50:26.481983024Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 13 21:50:26.483396 kubelet[2457]: W0113 21:50:26.483241 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:26.484984 kubelet[2457]: E0113 21:50:26.483662 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:26.485077 containerd[1580]: time="2025-01-13T21:50:26.484036071Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:50:26.488860 containerd[1580]: time="2025-01-13T21:50:26.488500733Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:50:26.491856 containerd[1580]: time="2025-01-13T21:50:26.491755369Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:50:26.496899 containerd[1580]: time="2025-01-13T21:50:26.496011410Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:50:26.499090 containerd[1580]: time="2025-01-13T21:50:26.498809567Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:50:26.502971 containerd[1580]: time="2025-01-13T21:50:26.502781514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:50:26.505580 containerd[1580]: time="2025-01-13T21:50:26.504929321Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 672.610274ms" Jan 13 21:50:26.518713 containerd[1580]: time="2025-01-13T21:50:26.518600039Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 679.532691ms" Jan 13 21:50:26.524168 containerd[1580]: time="2025-01-13T21:50:26.523933395Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 680.67911ms" Jan 13 21:50:26.541702 kubelet[2457]: W0113 21:50:26.541246 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-d-9566454817.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:26.541702 kubelet[2457]: E0113 21:50:26.541428 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-d-9566454817.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:26.778825 kubelet[2457]: E0113 21:50:26.777912 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-d-9566454817.novalocal?timeout=10s\": dial tcp 172.24.4.62:6443: connect: connection refused" interval="1.6s" Jan 13 21:50:26.794449 containerd[1580]: time="2025-01-13T21:50:26.794224600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:50:26.795144 containerd[1580]: time="2025-01-13T21:50:26.794846819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:50:26.798610 containerd[1580]: time="2025-01-13T21:50:26.798214084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:50:26.798610 containerd[1580]: time="2025-01-13T21:50:26.798267959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:50:26.798610 containerd[1580]: time="2025-01-13T21:50:26.798296470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:50:26.798610 containerd[1580]: time="2025-01-13T21:50:26.798458804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:50:26.798610 containerd[1580]: time="2025-01-13T21:50:26.797342104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:50:26.798610 containerd[1580]: time="2025-01-13T21:50:26.798293653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:50:26.805345 containerd[1580]: time="2025-01-13T21:50:26.804448673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:50:26.805345 containerd[1580]: time="2025-01-13T21:50:26.804510504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:50:26.805345 containerd[1580]: time="2025-01-13T21:50:26.804529815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:50:26.805345 containerd[1580]: time="2025-01-13T21:50:26.804612981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:50:26.889130 containerd[1580]: time="2025-01-13T21:50:26.887662987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-d-9566454817.novalocal,Uid:909c75a1601dbf57dbfb6521049cad81,Namespace:kube-system,Attempt:0,} returns sandbox id \"962dc4ebfc263df8d54705027e1c7e2ab71d3b164c94de6faa5c549a29d5027d\"" Jan 13 21:50:26.894256 kubelet[2457]: I0113 21:50:26.893946 2457 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:26.894888 kubelet[2457]: E0113 21:50:26.894654 2457 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.62:6443/api/v1/nodes\": dial tcp 172.24.4.62:6443: connect: connection refused" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:26.901963 containerd[1580]: time="2025-01-13T21:50:26.901671911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal,Uid:14b97e9aa51a5643bfd86a69cca8975a,Namespace:kube-system,Attempt:0,} returns sandbox id \"918b483b1d584de598630a56e362509149f61d6bc9d4fef1570bcb7c1e3bc486\"" Jan 13 21:50:26.903373 containerd[1580]: time="2025-01-13T21:50:26.903327134Z" level=info msg="CreateContainer within sandbox \"962dc4ebfc263df8d54705027e1c7e2ab71d3b164c94de6faa5c549a29d5027d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:50:26.908226 containerd[1580]: time="2025-01-13T21:50:26.908189782Z" level=info msg="CreateContainer within sandbox \"918b483b1d584de598630a56e362509149f61d6bc9d4fef1570bcb7c1e3bc486\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:50:26.915885 containerd[1580]: time="2025-01-13T21:50:26.915835063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-d-9566454817.novalocal,Uid:4aef24faa18832fa047c136297de1d5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1320ba7ff9d48e891c7bb7937011c1363e68ea93b54da8aa0b03d5c862a2cbc7\"" Jan 13 21:50:26.918190 containerd[1580]: time="2025-01-13T21:50:26.918034530Z" level=info msg="CreateContainer within sandbox \"1320ba7ff9d48e891c7bb7937011c1363e68ea93b54da8aa0b03d5c862a2cbc7\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:50:26.935968 kubelet[2457]: W0113 21:50:26.935917 2457 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:26.936145 kubelet[2457]: E0113 21:50:26.936125 2457 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:27.203773 containerd[1580]: time="2025-01-13T21:50:27.203717309Z" level=info msg="CreateContainer within sandbox \"918b483b1d584de598630a56e362509149f61d6bc9d4fef1570bcb7c1e3bc486\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7ec299c528571ea255d556b3d56fe7f16fe9bdd502e932c74770b4b7b530d07c\"" Jan 13 21:50:27.205088 containerd[1580]: time="2025-01-13T21:50:27.205003536Z" level=info msg="StartContainer for \"7ec299c528571ea255d556b3d56fe7f16fe9bdd502e932c74770b4b7b530d07c\"" Jan 13 21:50:27.206750 containerd[1580]: time="2025-01-13T21:50:27.206630954Z" level=info msg="CreateContainer within sandbox \"962dc4ebfc263df8d54705027e1c7e2ab71d3b164c94de6faa5c549a29d5027d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a4d4123b8a5b9074e8c86f6992f88fab2ed1a2160bfdc914bbc5f76693a5eca4\"" Jan 13 21:50:27.207626 containerd[1580]: time="2025-01-13T21:50:27.207415080Z" level=info msg="StartContainer for \"a4d4123b8a5b9074e8c86f6992f88fab2ed1a2160bfdc914bbc5f76693a5eca4\"" Jan 13 21:50:27.211013 containerd[1580]: time="2025-01-13T21:50:27.210834001Z" level=info msg="CreateContainer within sandbox \"1320ba7ff9d48e891c7bb7937011c1363e68ea93b54da8aa0b03d5c862a2cbc7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e3909920fbf7d10f2d21eb7e20493370eb41dd6f57837e1dbf7e2e5c3fdefba7\"" Jan 13 21:50:27.218164 containerd[1580]: time="2025-01-13T21:50:27.216196848Z" level=info msg="StartContainer for \"e3909920fbf7d10f2d21eb7e20493370eb41dd6f57837e1dbf7e2e5c3fdefba7\"" Jan 13 21:50:27.323693 kubelet[2457]: E0113 21:50:27.323653 2457 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.62:6443: connect: connection refused Jan 13 21:50:27.360357 containerd[1580]: time="2025-01-13T21:50:27.360288516Z" level=info msg="StartContainer for \"7ec299c528571ea255d556b3d56fe7f16fe9bdd502e932c74770b4b7b530d07c\" returns successfully" Jan 13 21:50:27.360515 containerd[1580]: time="2025-01-13T21:50:27.360466472Z" level=info msg="StartContainer for \"a4d4123b8a5b9074e8c86f6992f88fab2ed1a2160bfdc914bbc5f76693a5eca4\" returns successfully" Jan 13 21:50:27.379251 containerd[1580]: time="2025-01-13T21:50:27.379152299Z" level=info msg="StartContainer for \"e3909920fbf7d10f2d21eb7e20493370eb41dd6f57837e1dbf7e2e5c3fdefba7\" returns successfully" Jan 13 21:50:28.499792 kubelet[2457]: I0113 21:50:28.498943 2457 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:29.607931 kubelet[2457]: E0113 21:50:29.607866 2457 nodelease.go:49] "Failed 
to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-d-9566454817.novalocal\" not found" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:29.672594 kubelet[2457]: I0113 21:50:29.672473 2457 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:30.331621 kubelet[2457]: I0113 21:50:30.330724 2457 apiserver.go:52] "Watching apiserver" Jan 13 21:50:30.379209 kubelet[2457]: I0113 21:50:30.379048 2457 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:50:32.941492 systemd[1]: Reloading requested from client PID 2731 ('systemctl') (unit session-11.scope)... Jan 13 21:50:32.941533 systemd[1]: Reloading... Jan 13 21:50:33.053131 zram_generator::config[2770]: No configuration found. Jan 13 21:50:33.217809 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:50:33.306193 systemd[1]: Reloading finished in 363 ms. Jan 13 21:50:33.342752 kubelet[2457]: I0113 21:50:33.342687 2457 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:50:33.343257 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:50:33.360028 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:50:33.360369 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:50:33.367676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:50:33.592508 (kubelet)[2844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:50:33.593300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:50:33.670998 kubelet[2844]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:50:33.670998 kubelet[2844]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:50:33.670998 kubelet[2844]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:50:33.672622 kubelet[2844]: I0113 21:50:33.672365 2844 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:50:33.686879 kubelet[2844]: I0113 21:50:33.686847 2844 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:50:33.687036 kubelet[2844]: I0113 21:50:33.687013 2844 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:50:33.687414 kubelet[2844]: I0113 21:50:33.687401 2844 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:50:33.689360 kubelet[2844]: I0113 21:50:33.689332 2844 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
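Most of the errors in this stretch are the same symptom: every request to https://172.24.4.62:6443 is refused until the kube-apiserver static pod comes up, and the lease controller backs off with a doubling retry interval (200ms, 400ms, 800ms, 1.6s in the log). The standalone Python sketch below mimics that probe-and-back-off pattern; it is not kubelet code, and the backoff cap is an arbitrary choice for the example.

```python
# Probe the API server endpoint seen in the log and back off on failure.
import socket
import time

def probe(host="172.24.4.62", port=6443, timeout=2.0):
    """Return True once a TCP connection to the API server port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as err:             # e.g. ConnectionRefusedError
        print(f"dial tcp {host}:{port}: {err}")
        return False

interval = 0.2                         # 200ms, the first interval in the log
while not probe():
    time.sleep(interval)
    interval = min(interval * 2, 7.0)  # doubling backoff; cap chosen for the sketch
print("API server reachable; registration and lease updates can proceed")
```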
Jan 13 21:50:33.693408 kubelet[2844]: I0113 21:50:33.693352 2844 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:50:33.707405 kubelet[2844]: I0113 21:50:33.707360 2844 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:50:33.708185 kubelet[2844]: I0113 21:50:33.708170 2844 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:50:33.708466 kubelet[2844]: I0113 21:50:33.708444 2844 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:50:33.708596 kubelet[2844]: I0113 21:50:33.708585 2844 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:50:33.708651 kubelet[2844]: I0113 21:50:33.708643 2844 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:50:33.708779 kubelet[2844]: I0113 21:50:33.708728 2844 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:50:33.709398 kubelet[2844]: I0113 21:50:33.709369 2844 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:50:33.709398 kubelet[2844]: I0113 21:50:33.709393 2844 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:50:33.711868 kubelet[2844]: I0113 21:50:33.711441 2844 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:50:33.711868 kubelet[2844]: I0113 21:50:33.711465 2844 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:50:33.713593 kubelet[2844]: I0113 21:50:33.713437 2844 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:50:33.713635 kubelet[2844]: I0113 21:50:33.713624 2844 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:50:33.714186 kubelet[2844]: I0113 21:50:33.714045 2844 server.go:1256] "Started kubelet" Jan 13 21:50:33.720647 kubelet[2844]: I0113 21:50:33.720609 2844 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:50:33.724767 kubelet[2844]: I0113 21:50:33.724545 2844 
server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:50:33.732266 kubelet[2844]: I0113 21:50:33.731140 2844 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:50:33.733068 kubelet[2844]: I0113 21:50:33.733053 2844 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:50:33.733347 kubelet[2844]: I0113 21:50:33.733334 2844 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:50:33.736184 kubelet[2844]: I0113 21:50:33.736171 2844 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:50:33.737621 kubelet[2844]: I0113 21:50:33.737476 2844 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:50:33.740679 kubelet[2844]: I0113 21:50:33.740204 2844 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:50:33.750271 kubelet[2844]: I0113 21:50:33.746951 2844 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:50:33.750271 kubelet[2844]: I0113 21:50:33.747091 2844 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:50:33.757794 kubelet[2844]: I0113 21:50:33.757001 2844 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:50:33.761135 kubelet[2844]: I0113 21:50:33.760293 2844 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:50:33.761135 kubelet[2844]: I0113 21:50:33.760386 2844 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:50:33.761135 kubelet[2844]: I0113 21:50:33.760471 2844 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:50:33.761135 kubelet[2844]: E0113 21:50:33.760531 2844 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:50:33.772173 kubelet[2844]: I0113 21:50:33.771021 2844 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:50:33.788237 kubelet[2844]: E0113 21:50:33.788214 2844 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:50:33.789675 sudo[2872]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 21:50:33.789979 sudo[2872]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 21:50:33.844188 kubelet[2844]: I0113 21:50:33.841915 2844 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:33.861196 kubelet[2844]: I0113 21:50:33.859837 2844 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:33.861196 kubelet[2844]: I0113 21:50:33.859927 2844 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:33.863684 kubelet[2844]: E0113 21:50:33.863227 2844 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:50:33.880068 kubelet[2844]: I0113 21:50:33.880045 2844 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:50:33.882172 kubelet[2844]: I0113 21:50:33.880244 2844 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:50:33.882357 kubelet[2844]: I0113 21:50:33.882345 2844 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:50:33.882600 kubelet[2844]: I0113 21:50:33.882587 2844 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:50:33.882722 kubelet[2844]: I0113 21:50:33.882675 2844 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:50:33.882722 kubelet[2844]: I0113 21:50:33.882688 2844 policy_none.go:49] "None policy: Start" Jan 13 21:50:33.883653 kubelet[2844]: I0113 21:50:33.883627 2844 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:50:33.883721 kubelet[2844]: I0113 21:50:33.883672 2844 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:50:33.884994 kubelet[2844]: I0113 21:50:33.884969 2844 state_mem.go:75] "Updated machine memory state" Jan 13 21:50:33.886446 kubelet[2844]: I0113 21:50:33.886426 2844 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:50:33.888915 kubelet[2844]: I0113 21:50:33.887465 2844 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:50:34.066556 kubelet[2844]: I0113 21:50:34.063576 2844 topology_manager.go:215] "Topology Admit Handler" podUID="909c75a1601dbf57dbfb6521049cad81" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:34.066556 kubelet[2844]: I0113 21:50:34.063678 2844 topology_manager.go:215] "Topology Admit Handler" podUID="14b97e9aa51a5643bfd86a69cca8975a" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:34.066556 kubelet[2844]: I0113 21:50:34.063719 2844 topology_manager.go:215] "Topology Admit Handler" podUID="4aef24faa18832fa047c136297de1d5f" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:34.074848 kubelet[2844]: W0113 21:50:34.074594 2844 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:50:34.079804 kubelet[2844]: W0113 21:50:34.079514 2844 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots] Jan 13 21:50:34.080421 kubelet[2844]: W0113 21:50:34.080169 2844 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:50:34.143385 kubelet[2844]: I0113 21:50:34.143257 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14b97e9aa51a5643bfd86a69cca8975a-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"14b97e9aa51a5643bfd86a69cca8975a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:34.144025 kubelet[2844]: I0113 21:50:34.143929 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14b97e9aa51a5643bfd86a69cca8975a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"14b97e9aa51a5643bfd86a69cca8975a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:34.144025 kubelet[2844]: I0113 21:50:34.143991 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4aef24faa18832fa047c136297de1d5f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"4aef24faa18832fa047c136297de1d5f\") " pod="kube-system/kube-scheduler-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:34.144120 kubelet[2844]: I0113 21:50:34.144034 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/909c75a1601dbf57dbfb6521049cad81-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"909c75a1601dbf57dbfb6521049cad81\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:34.144120 kubelet[2844]: I0113 21:50:34.144076 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/909c75a1601dbf57dbfb6521049cad81-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"909c75a1601dbf57dbfb6521049cad81\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:34.144189 kubelet[2844]: I0113 21:50:34.144177 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14b97e9aa51a5643bfd86a69cca8975a-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"14b97e9aa51a5643bfd86a69cca8975a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:34.144216 kubelet[2844]: I0113 21:50:34.144206 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/909c75a1601dbf57dbfb6521049cad81-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"909c75a1601dbf57dbfb6521049cad81\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:34.144304 kubelet[2844]: I0113 21:50:34.144282 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14b97e9aa51a5643bfd86a69cca8975a-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"14b97e9aa51a5643bfd86a69cca8975a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:34.144362 kubelet[2844]: I0113 21:50:34.144344 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14b97e9aa51a5643bfd86a69cca8975a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal\" (UID: \"14b97e9aa51a5643bfd86a69cca8975a\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:34.463930 sudo[2872]: pam_unix(sudo:session): session closed for user root Jan 13 21:50:34.713598 kubelet[2844]: I0113 21:50:34.713324 2844 apiserver.go:52] "Watching apiserver" Jan 13 21:50:34.738442 kubelet[2844]: I0113 21:50:34.738226 2844 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:50:34.829582 kubelet[2844]: W0113 21:50:34.829506 2844 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:50:34.831149 kubelet[2844]: E0113 21:50:34.829781 2844 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-d-9566454817.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-d-9566454817.novalocal" Jan 13 21:50:34.896073 kubelet[2844]: I0113 21:50:34.896015 2844 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-d-9566454817.novalocal" podStartSLOduration=0.895926607 podStartE2EDuration="895.926607ms" podCreationTimestamp="2025-01-13 21:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:50:34.878989288 +0000 UTC m=+1.281138578" watchObservedRunningTime="2025-01-13 21:50:34.895926607 +0000 UTC m=+1.298075897" Jan 13 21:50:34.910940 kubelet[2844]: I0113 21:50:34.910792 2844 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-d-9566454817.novalocal" podStartSLOduration=0.910753139 podStartE2EDuration="910.753139ms" podCreationTimestamp="2025-01-13 21:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:50:34.897325688 +0000 UTC m=+1.299474978" watchObservedRunningTime="2025-01-13 21:50:34.910753139 +0000 UTC m=+1.312902379" Jan 13 21:50:36.462010 sudo[1873]: pam_unix(sudo:session): session closed for user root Jan 13 21:50:36.744289 sshd[1866]: pam_unix(sshd:session): session closed for user core Jan 13 21:50:36.750844 systemd[1]: sshd@8-172.24.4.62:22-172.24.4.1:43742.service: Deactivated successfully. Jan 13 21:50:36.759836 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:50:36.760451 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:50:36.764590 systemd-logind[1560]: Removed session 11. 
Jan 13 21:50:38.745254 kubelet[2844]: I0113 21:50:38.744286 2844 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-d-9566454817.novalocal" podStartSLOduration=4.744144587 podStartE2EDuration="4.744144587s" podCreationTimestamp="2025-01-13 21:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:50:34.91157914 +0000 UTC m=+1.313728380" watchObservedRunningTime="2025-01-13 21:50:38.744144587 +0000 UTC m=+5.146293887" Jan 13 21:50:46.408126 kubelet[2844]: I0113 21:50:46.407017 2844 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:50:46.408557 containerd[1580]: time="2025-01-13T21:50:46.407352683Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:50:46.409960 kubelet[2844]: I0113 21:50:46.409632 2844 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:50:46.461091 kubelet[2844]: I0113 21:50:46.461044 2844 topology_manager.go:215] "Topology Admit Handler" podUID="a2eb62f2-d93f-46aa-a2c0-eff2f5c53264" podNamespace="kube-system" podName="kube-proxy-gbfnr" Jan 13 21:50:46.490697 kubelet[2844]: I0113 21:50:46.490653 2844 topology_manager.go:215] "Topology Admit Handler" podUID="ab32633d-1989-46c7-a9f8-25caed4c696b" podNamespace="kube-system" podName="cilium-brdbd" Jan 13 21:50:46.501413 kubelet[2844]: W0113 21:50:46.501072 2844 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-0-d-9566454817.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-d-9566454817.novalocal' and this object Jan 13 21:50:46.501413 kubelet[2844]: E0113 21:50:46.501135 2844 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-0-d-9566454817.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-d-9566454817.novalocal' and this object Jan 13 21:50:46.501413 kubelet[2844]: W0113 21:50:46.501196 2844 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-0-d-9566454817.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-d-9566454817.novalocal' and this object Jan 13 21:50:46.501413 kubelet[2844]: E0113 21:50:46.501211 2844 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-0-d-9566454817.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-d-9566454817.novalocal' and this object Jan 13 21:50:46.502418 kubelet[2844]: W0113 21:50:46.501991 2844 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-0-d-9566454817.novalocal" cannot list resource "configmaps" in API group "" in the 
namespace "kube-system": no relationship found between node 'ci-4081-3-0-d-9566454817.novalocal' and this object Jan 13 21:50:46.502418 kubelet[2844]: E0113 21:50:46.502017 2844 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-0-d-9566454817.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-d-9566454817.novalocal' and this object Jan 13 21:50:46.527568 kubelet[2844]: I0113 21:50:46.526905 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-xtables-lock\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " pod="kube-system/cilium-brdbd" Jan 13 21:50:46.527568 kubelet[2844]: I0113 21:50:46.526953 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-host-proc-sys-kernel\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " pod="kube-system/cilium-brdbd" Jan 13 21:50:46.527568 kubelet[2844]: I0113 21:50:46.526985 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a2eb62f2-d93f-46aa-a2c0-eff2f5c53264-kube-proxy\") pod \"kube-proxy-gbfnr\" (UID: \"a2eb62f2-d93f-46aa-a2c0-eff2f5c53264\") " pod="kube-system/kube-proxy-gbfnr" Jan 13 21:50:46.527568 kubelet[2844]: I0113 21:50:46.527016 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-cgroup\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " pod="kube-system/cilium-brdbd" Jan 13 21:50:46.527568 kubelet[2844]: I0113 21:50:46.527042 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2eb62f2-d93f-46aa-a2c0-eff2f5c53264-xtables-lock\") pod \"kube-proxy-gbfnr\" (UID: \"a2eb62f2-d93f-46aa-a2c0-eff2f5c53264\") " pod="kube-system/kube-proxy-gbfnr" Jan 13 21:50:46.527568 kubelet[2844]: I0113 21:50:46.527070 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2eb62f2-d93f-46aa-a2c0-eff2f5c53264-lib-modules\") pod \"kube-proxy-gbfnr\" (UID: \"a2eb62f2-d93f-46aa-a2c0-eff2f5c53264\") " pod="kube-system/kube-proxy-gbfnr" Jan 13 21:50:46.527908 kubelet[2844]: I0113 21:50:46.527112 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-bpf-maps\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " pod="kube-system/cilium-brdbd" Jan 13 21:50:46.527908 kubelet[2844]: I0113 21:50:46.527139 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-etc-cni-netd\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " 
pod="kube-system/cilium-brdbd" Jan 13 21:50:46.527908 kubelet[2844]: I0113 21:50:46.527166 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab32633d-1989-46c7-a9f8-25caed4c696b-hubble-tls\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " pod="kube-system/cilium-brdbd" Jan 13 21:50:46.527908 kubelet[2844]: I0113 21:50:46.527194 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7rb7\" (UniqueName: \"kubernetes.io/projected/a2eb62f2-d93f-46aa-a2c0-eff2f5c53264-kube-api-access-p7rb7\") pod \"kube-proxy-gbfnr\" (UID: \"a2eb62f2-d93f-46aa-a2c0-eff2f5c53264\") " pod="kube-system/kube-proxy-gbfnr" Jan 13 21:50:46.527908 kubelet[2844]: I0113 21:50:46.527218 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-cni-path\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " pod="kube-system/cilium-brdbd" Jan 13 21:50:46.527908 kubelet[2844]: I0113 21:50:46.527243 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab32633d-1989-46c7-a9f8-25caed4c696b-clustermesh-secrets\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " pod="kube-system/cilium-brdbd" Jan 13 21:50:46.528078 kubelet[2844]: I0113 21:50:46.527268 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-host-proc-sys-net\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " pod="kube-system/cilium-brdbd" Jan 13 21:50:46.528078 kubelet[2844]: I0113 21:50:46.527292 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-config-path\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " pod="kube-system/cilium-brdbd" Jan 13 21:50:46.528078 kubelet[2844]: I0113 21:50:46.527319 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-run\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " pod="kube-system/cilium-brdbd" Jan 13 21:50:46.528078 kubelet[2844]: I0113 21:50:46.527346 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-hostproc\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " pod="kube-system/cilium-brdbd" Jan 13 21:50:46.528078 kubelet[2844]: I0113 21:50:46.527369 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-lib-modules\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " pod="kube-system/cilium-brdbd" Jan 13 21:50:46.528078 kubelet[2844]: I0113 21:50:46.527419 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-smnkz\" (UniqueName: \"kubernetes.io/projected/ab32633d-1989-46c7-a9f8-25caed4c696b-kube-api-access-smnkz\") pod \"cilium-brdbd\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") " pod="kube-system/cilium-brdbd" Jan 13 21:50:46.574722 kubelet[2844]: I0113 21:50:46.569621 2844 topology_manager.go:215] "Topology Admit Handler" podUID="4fb08bff-c63e-45dc-b459-e2214fc25561" podNamespace="kube-system" podName="cilium-operator-5cc964979-br2lq" Jan 13 21:50:46.629079 kubelet[2844]: I0113 21:50:46.628018 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmxcz\" (UniqueName: \"kubernetes.io/projected/4fb08bff-c63e-45dc-b459-e2214fc25561-kube-api-access-fmxcz\") pod \"cilium-operator-5cc964979-br2lq\" (UID: \"4fb08bff-c63e-45dc-b459-e2214fc25561\") " pod="kube-system/cilium-operator-5cc964979-br2lq" Jan 13 21:50:46.629079 kubelet[2844]: I0113 21:50:46.628060 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fb08bff-c63e-45dc-b459-e2214fc25561-cilium-config-path\") pod \"cilium-operator-5cc964979-br2lq\" (UID: \"4fb08bff-c63e-45dc-b459-e2214fc25561\") " pod="kube-system/cilium-operator-5cc964979-br2lq" Jan 13 21:50:46.768907 containerd[1580]: time="2025-01-13T21:50:46.768848071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gbfnr,Uid:a2eb62f2-d93f-46aa-a2c0-eff2f5c53264,Namespace:kube-system,Attempt:0,}" Jan 13 21:50:46.811300 containerd[1580]: time="2025-01-13T21:50:46.810705740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:50:46.811300 containerd[1580]: time="2025-01-13T21:50:46.811278300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:50:46.811566 containerd[1580]: time="2025-01-13T21:50:46.811322640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:50:46.811566 containerd[1580]: time="2025-01-13T21:50:46.811504875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:50:46.857410 containerd[1580]: time="2025-01-13T21:50:46.857378133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gbfnr,Uid:a2eb62f2-d93f-46aa-a2c0-eff2f5c53264,Namespace:kube-system,Attempt:0,} returns sandbox id \"c39957e6b4e593bc6a182f097933638e162bf183f2ff628f316617c744a73c21\"" Jan 13 21:50:46.861926 containerd[1580]: time="2025-01-13T21:50:46.861775920Z" level=info msg="CreateContainer within sandbox \"c39957e6b4e593bc6a182f097933638e162bf183f2ff628f316617c744a73c21\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:50:46.885674 containerd[1580]: time="2025-01-13T21:50:46.885623028Z" level=info msg="CreateContainer within sandbox \"c39957e6b4e593bc6a182f097933638e162bf183f2ff628f316617c744a73c21\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"74f760ef91fb4fd2267bc61b5642c4fb6a860eb0e39ce0177556f2639dcbe9d1\"" Jan 13 21:50:46.887082 containerd[1580]: time="2025-01-13T21:50:46.887055456Z" level=info msg="StartContainer for \"74f760ef91fb4fd2267bc61b5642c4fb6a860eb0e39ce0177556f2639dcbe9d1\"" Jan 13 21:50:46.944438 containerd[1580]: time="2025-01-13T21:50:46.944365847Z" level=info msg="StartContainer for \"74f760ef91fb4fd2267bc61b5642c4fb6a860eb0e39ce0177556f2639dcbe9d1\" returns successfully" Jan 13 21:50:47.630181 kubelet[2844]: E0113 21:50:47.630092 2844 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:50:47.630977 kubelet[2844]: E0113 21:50:47.630243 2844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-config-path podName:ab32633d-1989-46c7-a9f8-25caed4c696b nodeName:}" failed. No retries permitted until 2025-01-13 21:50:48.130204093 +0000 UTC m=+14.532353383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-config-path") pod "cilium-brdbd" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:50:47.630977 kubelet[2844]: E0113 21:50:47.630293 2844 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 13 21:50:47.630977 kubelet[2844]: E0113 21:50:47.630357 2844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab32633d-1989-46c7-a9f8-25caed4c696b-clustermesh-secrets podName:ab32633d-1989-46c7-a9f8-25caed4c696b nodeName:}" failed. No retries permitted until 2025-01-13 21:50:48.130336468 +0000 UTC m=+14.532485748 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/ab32633d-1989-46c7-a9f8-25caed4c696b-clustermesh-secrets") pod "cilium-brdbd" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b") : failed to sync secret cache: timed out waiting for the condition Jan 13 21:50:47.631727 kubelet[2844]: E0113 21:50:47.631501 2844 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 13 21:50:47.631727 kubelet[2844]: E0113 21:50:47.631547 2844 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-brdbd: failed to sync secret cache: timed out waiting for the condition Jan 13 21:50:47.631727 kubelet[2844]: E0113 21:50:47.631674 2844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab32633d-1989-46c7-a9f8-25caed4c696b-hubble-tls podName:ab32633d-1989-46c7-a9f8-25caed4c696b nodeName:}" failed. No retries permitted until 2025-01-13 21:50:48.131635876 +0000 UTC m=+14.533785166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/ab32633d-1989-46c7-a9f8-25caed4c696b-hubble-tls") pod "cilium-brdbd" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b") : failed to sync secret cache: timed out waiting for the condition Jan 13 21:50:47.729177 kubelet[2844]: E0113 21:50:47.729076 2844 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:50:47.730217 kubelet[2844]: E0113 21:50:47.729269 2844 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4fb08bff-c63e-45dc-b459-e2214fc25561-cilium-config-path podName:4fb08bff-c63e-45dc-b459-e2214fc25561 nodeName:}" failed. No retries permitted until 2025-01-13 21:50:48.229223969 +0000 UTC m=+14.631373249 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/4fb08bff-c63e-45dc-b459-e2214fc25561-cilium-config-path") pod "cilium-operator-5cc964979-br2lq" (UID: "4fb08bff-c63e-45dc-b459-e2214fc25561") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:50:47.880523 kubelet[2844]: I0113 21:50:47.880343 2844 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gbfnr" podStartSLOduration=1.879443339 podStartE2EDuration="1.879443339s" podCreationTimestamp="2025-01-13 21:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:50:47.879205963 +0000 UTC m=+14.281355253" watchObservedRunningTime="2025-01-13 21:50:47.879443339 +0000 UTC m=+14.281592629" Jan 13 21:50:48.298030 containerd[1580]: time="2025-01-13T21:50:48.297916282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-brdbd,Uid:ab32633d-1989-46c7-a9f8-25caed4c696b,Namespace:kube-system,Attempt:0,}" Jan 13 21:50:48.358673 containerd[1580]: time="2025-01-13T21:50:48.357838982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:50:48.358673 containerd[1580]: time="2025-01-13T21:50:48.358070566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:50:48.358673 containerd[1580]: time="2025-01-13T21:50:48.358164354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:50:48.358673 containerd[1580]: time="2025-01-13T21:50:48.358387360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:50:48.383247 containerd[1580]: time="2025-01-13T21:50:48.382489697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-br2lq,Uid:4fb08bff-c63e-45dc-b459-e2214fc25561,Namespace:kube-system,Attempt:0,}" Jan 13 21:50:48.425314 containerd[1580]: time="2025-01-13T21:50:48.425271102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-brdbd,Uid:ab32633d-1989-46c7-a9f8-25caed4c696b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\"" Jan 13 21:50:48.429143 containerd[1580]: time="2025-01-13T21:50:48.427748326Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:50:48.432891 containerd[1580]: time="2025-01-13T21:50:48.432762175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:50:48.432891 containerd[1580]: time="2025-01-13T21:50:48.432843367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:50:48.433234 containerd[1580]: time="2025-01-13T21:50:48.432865843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:50:48.433362 containerd[1580]: time="2025-01-13T21:50:48.433183359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:50:48.489817 containerd[1580]: time="2025-01-13T21:50:48.489748790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-br2lq,Uid:4fb08bff-c63e-45dc-b459-e2214fc25561,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f16cfd8c8d416d9f631dcbc92095ca42395441dc7be661403620d7180496d9a\"" Jan 13 21:50:53.172860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3296787874.mount: Deactivated successfully. 
Jan 13 21:51:01.212839 containerd[1580]: time="2025-01-13T21:51:01.212699410Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:51:01.214617 containerd[1580]: time="2025-01-13T21:51:01.214423498Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734123" Jan 13 21:51:01.215943 containerd[1580]: time="2025-01-13T21:51:01.215871319Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:51:01.218271 containerd[1580]: time="2025-01-13T21:51:01.217691489Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.789902471s" Jan 13 21:51:01.218271 containerd[1580]: time="2025-01-13T21:51:01.217741849Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 21:51:01.219429 containerd[1580]: time="2025-01-13T21:51:01.219394515Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:51:01.221164 containerd[1580]: time="2025-01-13T21:51:01.221123825Z" level=info msg="CreateContainer within sandbox \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:51:01.246393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3454705578.mount: Deactivated successfully. 
Jan 13 21:51:01.250414 containerd[1580]: time="2025-01-13T21:51:01.250362093Z" level=info msg="CreateContainer within sandbox \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e\"" Jan 13 21:51:01.253117 containerd[1580]: time="2025-01-13T21:51:01.252695399Z" level=info msg="StartContainer for \"640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e\"" Jan 13 21:51:01.319505 containerd[1580]: time="2025-01-13T21:51:01.318025656Z" level=info msg="StartContainer for \"640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e\" returns successfully" Jan 13 21:51:02.180451 containerd[1580]: time="2025-01-13T21:51:02.180130230Z" level=info msg="shim disconnected" id=640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e namespace=k8s.io Jan 13 21:51:02.180451 containerd[1580]: time="2025-01-13T21:51:02.180217703Z" level=warning msg="cleaning up after shim disconnected" id=640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e namespace=k8s.io Jan 13 21:51:02.180451 containerd[1580]: time="2025-01-13T21:51:02.180235799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:51:02.246910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e-rootfs.mount: Deactivated successfully. Jan 13 21:51:02.907789 containerd[1580]: time="2025-01-13T21:51:02.907685947Z" level=info msg="CreateContainer within sandbox \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:51:02.950664 containerd[1580]: time="2025-01-13T21:51:02.949702622Z" level=info msg="CreateContainer within sandbox \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e\"" Jan 13 21:51:02.955854 containerd[1580]: time="2025-01-13T21:51:02.954402958Z" level=info msg="StartContainer for \"25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e\"" Jan 13 21:51:03.029054 containerd[1580]: time="2025-01-13T21:51:03.027846583Z" level=info msg="StartContainer for \"25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e\" returns successfully" Jan 13 21:51:03.036996 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:51:03.037455 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:51:03.037516 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:51:03.056406 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:51:03.081855 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 21:51:03.085806 containerd[1580]: time="2025-01-13T21:51:03.085479102Z" level=info msg="shim disconnected" id=25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e namespace=k8s.io Jan 13 21:51:03.085806 containerd[1580]: time="2025-01-13T21:51:03.085578338Z" level=warning msg="cleaning up after shim disconnected" id=25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e namespace=k8s.io Jan 13 21:51:03.085806 containerd[1580]: time="2025-01-13T21:51:03.085589661Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:51:03.244226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e-rootfs.mount: Deactivated successfully. Jan 13 21:51:03.922509 containerd[1580]: time="2025-01-13T21:51:03.921295400Z" level=info msg="CreateContainer within sandbox \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:51:03.980944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount388945942.mount: Deactivated successfully. Jan 13 21:51:03.984672 containerd[1580]: time="2025-01-13T21:51:03.984610807Z" level=info msg="CreateContainer within sandbox \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99\"" Jan 13 21:51:03.985683 containerd[1580]: time="2025-01-13T21:51:03.985632120Z" level=info msg="StartContainer for \"c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99\"" Jan 13 21:51:04.067553 containerd[1580]: time="2025-01-13T21:51:04.067521082Z" level=info msg="StartContainer for \"c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99\" returns successfully" Jan 13 21:51:04.092310 containerd[1580]: time="2025-01-13T21:51:04.092178004Z" level=info msg="shim disconnected" id=c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99 namespace=k8s.io Jan 13 21:51:04.092310 containerd[1580]: time="2025-01-13T21:51:04.092240288Z" level=warning msg="cleaning up after shim disconnected" id=c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99 namespace=k8s.io Jan 13 21:51:04.092310 containerd[1580]: time="2025-01-13T21:51:04.092250338Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:51:04.104900 containerd[1580]: time="2025-01-13T21:51:04.103851489Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:51:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 21:51:04.245046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99-rootfs.mount: Deactivated successfully. 
Jan 13 21:51:04.929034 containerd[1580]: time="2025-01-13T21:51:04.928315335Z" level=info msg="CreateContainer within sandbox \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:51:04.971003 containerd[1580]: time="2025-01-13T21:51:04.968611426Z" level=info msg="CreateContainer within sandbox \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7\"" Jan 13 21:51:04.976344 containerd[1580]: time="2025-01-13T21:51:04.975274859Z" level=info msg="StartContainer for \"bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7\"" Jan 13 21:51:05.059960 containerd[1580]: time="2025-01-13T21:51:05.059928443Z" level=info msg="StartContainer for \"bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7\" returns successfully" Jan 13 21:51:05.080553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7-rootfs.mount: Deactivated successfully. Jan 13 21:51:05.088786 containerd[1580]: time="2025-01-13T21:51:05.088587917Z" level=info msg="shim disconnected" id=bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7 namespace=k8s.io Jan 13 21:51:05.088786 containerd[1580]: time="2025-01-13T21:51:05.088638326Z" level=warning msg="cleaning up after shim disconnected" id=bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7 namespace=k8s.io Jan 13 21:51:05.088786 containerd[1580]: time="2025-01-13T21:51:05.088647895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:51:05.941800 containerd[1580]: time="2025-01-13T21:51:05.939728681Z" level=info msg="CreateContainer within sandbox \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:51:05.993417 containerd[1580]: time="2025-01-13T21:51:05.993306094Z" level=info msg="CreateContainer within sandbox \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\"" Jan 13 21:51:05.996498 containerd[1580]: time="2025-01-13T21:51:05.996443225Z" level=info msg="StartContainer for \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\"" Jan 13 21:51:06.032896 systemd[1]: run-containerd-runc-k8s.io-04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b-runc.n522cf.mount: Deactivated successfully. 
Jan 13 21:51:06.077037 containerd[1580]: time="2025-01-13T21:51:06.076881584Z" level=info msg="StartContainer for \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\" returns successfully" Jan 13 21:51:06.158240 kubelet[2844]: I0113 21:51:06.158215 2844 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:51:06.199215 kubelet[2844]: I0113 21:51:06.198679 2844 topology_manager.go:215] "Topology Admit Handler" podUID="9c2636f8-d80d-4bc6-b7a5-c0a3dcbebde0" podNamespace="kube-system" podName="coredns-76f75df574-24dcm" Jan 13 21:51:06.202864 kubelet[2844]: I0113 21:51:06.201698 2844 topology_manager.go:215] "Topology Admit Handler" podUID="3d8308ec-3b3f-454d-8b1f-07a8d821e03a" podNamespace="kube-system" podName="coredns-76f75df574-ccmkj" Jan 13 21:51:06.282476 kubelet[2844]: I0113 21:51:06.282437 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c2636f8-d80d-4bc6-b7a5-c0a3dcbebde0-config-volume\") pod \"coredns-76f75df574-24dcm\" (UID: \"9c2636f8-d80d-4bc6-b7a5-c0a3dcbebde0\") " pod="kube-system/coredns-76f75df574-24dcm" Jan 13 21:51:06.282816 kubelet[2844]: I0113 21:51:06.282778 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvzjg\" (UniqueName: \"kubernetes.io/projected/3d8308ec-3b3f-454d-8b1f-07a8d821e03a-kube-api-access-cvzjg\") pod \"coredns-76f75df574-ccmkj\" (UID: \"3d8308ec-3b3f-454d-8b1f-07a8d821e03a\") " pod="kube-system/coredns-76f75df574-ccmkj" Jan 13 21:51:06.282969 kubelet[2844]: I0113 21:51:06.282905 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d8308ec-3b3f-454d-8b1f-07a8d821e03a-config-volume\") pod \"coredns-76f75df574-ccmkj\" (UID: \"3d8308ec-3b3f-454d-8b1f-07a8d821e03a\") " pod="kube-system/coredns-76f75df574-ccmkj" Jan 13 21:51:06.283111 kubelet[2844]: I0113 21:51:06.283020 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n865\" (UniqueName: \"kubernetes.io/projected/9c2636f8-d80d-4bc6-b7a5-c0a3dcbebde0-kube-api-access-4n865\") pod \"coredns-76f75df574-24dcm\" (UID: \"9c2636f8-d80d-4bc6-b7a5-c0a3dcbebde0\") " pod="kube-system/coredns-76f75df574-24dcm" Jan 13 21:51:06.521547 containerd[1580]: time="2025-01-13T21:51:06.521207417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ccmkj,Uid:3d8308ec-3b3f-454d-8b1f-07a8d821e03a,Namespace:kube-system,Attempt:0,}" Jan 13 21:51:06.526293 containerd[1580]: time="2025-01-13T21:51:06.526245766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-24dcm,Uid:9c2636f8-d80d-4bc6-b7a5-c0a3dcbebde0,Namespace:kube-system,Attempt:0,}" Jan 13 21:51:07.005515 kubelet[2844]: I0113 21:51:07.001364 2844 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-brdbd" podStartSLOduration=8.210414983 podStartE2EDuration="21.001303359s" podCreationTimestamp="2025-01-13 21:50:46 +0000 UTC" firstStartedPulling="2025-01-13 21:50:48.427289306 +0000 UTC m=+14.829438556" lastFinishedPulling="2025-01-13 21:51:01.218177682 +0000 UTC m=+27.620326932" observedRunningTime="2025-01-13 21:51:06.982828365 +0000 UTC m=+33.384977635" watchObservedRunningTime="2025-01-13 21:51:07.001303359 +0000 UTC m=+33.403452619" Jan 13 21:51:07.448226 containerd[1580]: 
time="2025-01-13T21:51:07.448175391Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:51:07.449565 containerd[1580]: time="2025-01-13T21:51:07.449360125Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907265" Jan 13 21:51:07.450789 containerd[1580]: time="2025-01-13T21:51:07.450730258Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:51:07.452472 containerd[1580]: time="2025-01-13T21:51:07.452355453Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.232923676s" Jan 13 21:51:07.452472 containerd[1580]: time="2025-01-13T21:51:07.452387120Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 21:51:07.455230 containerd[1580]: time="2025-01-13T21:51:07.455176452Z" level=info msg="CreateContainer within sandbox \"7f16cfd8c8d416d9f631dcbc92095ca42395441dc7be661403620d7180496d9a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 21:51:07.479216 containerd[1580]: time="2025-01-13T21:51:07.478966894Z" level=info msg="CreateContainer within sandbox \"7f16cfd8c8d416d9f631dcbc92095ca42395441dc7be661403620d7180496d9a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\"" Jan 13 21:51:07.481156 containerd[1580]: time="2025-01-13T21:51:07.480796763Z" level=info msg="StartContainer for \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\"" Jan 13 21:51:07.551340 containerd[1580]: time="2025-01-13T21:51:07.551287325Z" level=info msg="StartContainer for \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\" returns successfully" Jan 13 21:51:07.973916 kubelet[2844]: I0113 21:51:07.973877 2844 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-br2lq" podStartSLOduration=3.012129119 podStartE2EDuration="21.97382564s" podCreationTimestamp="2025-01-13 21:50:46 +0000 UTC" firstStartedPulling="2025-01-13 21:50:48.491309658 +0000 UTC m=+14.893458898" lastFinishedPulling="2025-01-13 21:51:07.453006169 +0000 UTC m=+33.855155419" observedRunningTime="2025-01-13 21:51:07.973709875 +0000 UTC m=+34.375859115" watchObservedRunningTime="2025-01-13 21:51:07.97382564 +0000 UTC m=+34.375974890" Jan 13 21:51:07.976778 systemd[1]: run-containerd-runc-k8s.io-9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d-runc.dVkOQN.mount: Deactivated successfully. 
Jan 13 21:51:11.338727 systemd-networkd[1210]: cilium_host: Link UP Jan 13 21:51:11.342053 systemd-networkd[1210]: cilium_net: Link UP Jan 13 21:51:11.342857 systemd-networkd[1210]: cilium_net: Gained carrier Jan 13 21:51:11.344234 systemd-networkd[1210]: cilium_host: Gained carrier Jan 13 21:51:11.448725 systemd-networkd[1210]: cilium_vxlan: Link UP Jan 13 21:51:11.448730 systemd-networkd[1210]: cilium_vxlan: Gained carrier Jan 13 21:51:11.667612 systemd-networkd[1210]: cilium_host: Gained IPv6LL Jan 13 21:51:11.746822 kernel: NET: Registered PF_ALG protocol family Jan 13 21:51:11.947230 systemd-networkd[1210]: cilium_net: Gained IPv6LL Jan 13 21:51:12.585145 systemd-networkd[1210]: lxc_health: Link UP Jan 13 21:51:12.590700 systemd-networkd[1210]: lxc_health: Gained carrier Jan 13 21:51:12.651285 systemd-networkd[1210]: cilium_vxlan: Gained IPv6LL Jan 13 21:51:13.146034 systemd-networkd[1210]: lxc7b5c6e035827: Link UP Jan 13 21:51:13.149848 kernel: eth0: renamed from tmpec94a Jan 13 21:51:13.156471 systemd-networkd[1210]: lxc7b5c6e035827: Gained carrier Jan 13 21:51:13.168504 systemd-networkd[1210]: lxcb1fc031acb31: Link UP Jan 13 21:51:13.175476 kernel: eth0: renamed from tmp039b1 Jan 13 21:51:13.185422 systemd-networkd[1210]: lxcb1fc031acb31: Gained carrier Jan 13 21:51:14.316262 systemd-networkd[1210]: lxc_health: Gained IPv6LL Jan 13 21:51:14.443275 systemd-networkd[1210]: lxc7b5c6e035827: Gained IPv6LL Jan 13 21:51:14.891437 systemd-networkd[1210]: lxcb1fc031acb31: Gained IPv6LL Jan 13 21:51:17.673056 containerd[1580]: time="2025-01-13T21:51:17.672956168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:51:17.673056 containerd[1580]: time="2025-01-13T21:51:17.673012861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:51:17.673056 containerd[1580]: time="2025-01-13T21:51:17.673027517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:51:17.673700 containerd[1580]: time="2025-01-13T21:51:17.673135744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:51:17.769499 containerd[1580]: time="2025-01-13T21:51:17.769256683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:51:17.769499 containerd[1580]: time="2025-01-13T21:51:17.769311023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:51:17.769499 containerd[1580]: time="2025-01-13T21:51:17.769338743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:51:17.769499 containerd[1580]: time="2025-01-13T21:51:17.769425130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:51:17.777698 containerd[1580]: time="2025-01-13T21:51:17.777665071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ccmkj,Uid:3d8308ec-3b3f-454d-8b1f-07a8d821e03a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec94aa1628bbe055999640e7f564df4a4736da8e08da8409c2930ea0cf41ea6a\"" Jan 13 21:51:17.784950 containerd[1580]: time="2025-01-13T21:51:17.784917112Z" level=info msg="CreateContainer within sandbox \"ec94aa1628bbe055999640e7f564df4a4736da8e08da8409c2930ea0cf41ea6a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:51:17.826034 containerd[1580]: time="2025-01-13T21:51:17.825799164Z" level=info msg="CreateContainer within sandbox \"ec94aa1628bbe055999640e7f564df4a4736da8e08da8409c2930ea0cf41ea6a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1bb2a9a52b942262f84aa69ed1ef728d5b3ff066583e2eeb0248ba73524ea9bb\"" Jan 13 21:51:17.828163 containerd[1580]: time="2025-01-13T21:51:17.827489554Z" level=info msg="StartContainer for \"1bb2a9a52b942262f84aa69ed1ef728d5b3ff066583e2eeb0248ba73524ea9bb\"" Jan 13 21:51:17.875704 containerd[1580]: time="2025-01-13T21:51:17.875438580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-24dcm,Uid:9c2636f8-d80d-4bc6-b7a5-c0a3dcbebde0,Namespace:kube-system,Attempt:0,} returns sandbox id \"039b102c407df083e12c65c1983f3116160046e008e71812ccd3501c4e7c6426\"" Jan 13 21:51:17.887759 containerd[1580]: time="2025-01-13T21:51:17.887710806Z" level=info msg="CreateContainer within sandbox \"039b102c407df083e12c65c1983f3116160046e008e71812ccd3501c4e7c6426\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:51:17.913230 containerd[1580]: time="2025-01-13T21:51:17.913198796Z" level=info msg="StartContainer for \"1bb2a9a52b942262f84aa69ed1ef728d5b3ff066583e2eeb0248ba73524ea9bb\" returns successfully" Jan 13 21:51:17.915917 containerd[1580]: time="2025-01-13T21:51:17.915744664Z" level=info msg="CreateContainer within sandbox \"039b102c407df083e12c65c1983f3116160046e008e71812ccd3501c4e7c6426\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"caf2e0bee6500e92d21c6b6dff0bde53347dbd72bce2e98c481d5dc9ea46f2de\"" Jan 13 21:51:17.916566 containerd[1580]: time="2025-01-13T21:51:17.916524414Z" level=info msg="StartContainer for \"caf2e0bee6500e92d21c6b6dff0bde53347dbd72bce2e98c481d5dc9ea46f2de\"" Jan 13 21:51:18.005907 containerd[1580]: time="2025-01-13T21:51:18.005392077Z" level=info msg="StartContainer for \"caf2e0bee6500e92d21c6b6dff0bde53347dbd72bce2e98c481d5dc9ea46f2de\" returns successfully" Jan 13 21:51:18.016007 kubelet[2844]: I0113 21:51:18.014493 2844 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ccmkj" podStartSLOduration=32.014418394 podStartE2EDuration="32.014418394s" podCreationTimestamp="2025-01-13 21:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:51:18.010350779 +0000 UTC m=+44.412500019" watchObservedRunningTime="2025-01-13 21:51:18.014418394 +0000 UTC m=+44.416567644" Jan 13 21:51:19.054201 kubelet[2844]: I0113 21:51:19.050952 2844 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-24dcm" podStartSLOduration=33.050865689 podStartE2EDuration="33.050865689s" podCreationTimestamp="2025-01-13 21:50:46 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:51:19.02006103 +0000 UTC m=+45.422210320" watchObservedRunningTime="2025-01-13 21:51:19.050865689 +0000 UTC m=+45.453015029" Jan 13 21:52:03.051653 systemd[1]: Started sshd@9-172.24.4.62:22-172.24.4.1:40516.service - OpenSSH per-connection server daemon (172.24.4.1:40516). Jan 13 21:52:04.623038 sshd[4214]: Accepted publickey for core from 172.24.4.1 port 40516 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:04.625871 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:04.652238 systemd-logind[1560]: New session 12 of user core. Jan 13 21:52:04.659253 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:52:05.398928 sshd[4214]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:05.403502 systemd[1]: sshd@9-172.24.4.62:22-172.24.4.1:40516.service: Deactivated successfully. Jan 13 21:52:05.411148 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:52:05.415521 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:52:05.418589 systemd-logind[1560]: Removed session 12. Jan 13 21:52:10.414715 systemd[1]: Started sshd@10-172.24.4.62:22-172.24.4.1:53970.service - OpenSSH per-connection server daemon (172.24.4.1:53970). Jan 13 21:52:11.599359 sshd[4229]: Accepted publickey for core from 172.24.4.1 port 53970 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:11.602040 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:11.614019 systemd-logind[1560]: New session 13 of user core. Jan 13 21:52:11.620453 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:52:12.418984 sshd[4229]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:12.426602 systemd[1]: sshd@10-172.24.4.62:22-172.24.4.1:53970.service: Deactivated successfully. Jan 13 21:52:12.434978 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:52:12.435873 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:52:12.438957 systemd-logind[1560]: Removed session 13. Jan 13 21:52:17.430618 systemd[1]: Started sshd@11-172.24.4.62:22-172.24.4.1:38224.service - OpenSSH per-connection server daemon (172.24.4.1:38224). Jan 13 21:52:18.609607 sshd[4246]: Accepted publickey for core from 172.24.4.1 port 38224 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:18.612341 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:18.621553 systemd-logind[1560]: New session 14 of user core. Jan 13 21:52:18.631604 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:52:19.347516 sshd[4246]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:19.354208 systemd[1]: sshd@11-172.24.4.62:22-172.24.4.1:38224.service: Deactivated successfully. Jan 13 21:52:19.363602 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:52:19.368294 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:52:19.370455 systemd-logind[1560]: Removed session 14. Jan 13 21:52:24.361276 systemd[1]: Started sshd@12-172.24.4.62:22-172.24.4.1:35210.service - OpenSSH per-connection server daemon (172.24.4.1:35210). 
Jan 13 21:52:25.684026 sshd[4260]: Accepted publickey for core from 172.24.4.1 port 35210 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:25.687191 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:25.698438 systemd-logind[1560]: New session 15 of user core. Jan 13 21:52:25.703636 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:52:26.701447 sshd[4260]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:26.715195 systemd[1]: Started sshd@13-172.24.4.62:22-172.24.4.1:35226.service - OpenSSH per-connection server daemon (172.24.4.1:35226). Jan 13 21:52:26.717082 systemd[1]: sshd@12-172.24.4.62:22-172.24.4.1:35210.service: Deactivated successfully. Jan 13 21:52:26.726704 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:52:26.732737 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:52:26.736068 systemd-logind[1560]: Removed session 15. Jan 13 21:52:28.023525 sshd[4274]: Accepted publickey for core from 172.24.4.1 port 35226 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:28.026389 sshd[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:28.036768 systemd-logind[1560]: New session 16 of user core. Jan 13 21:52:28.045610 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:52:28.855254 sshd[4274]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:28.860241 systemd[1]: sshd@13-172.24.4.62:22-172.24.4.1:35226.service: Deactivated successfully. Jan 13 21:52:28.864641 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:52:28.866269 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:52:28.875634 systemd[1]: Started sshd@14-172.24.4.62:22-172.24.4.1:35240.service - OpenSSH per-connection server daemon (172.24.4.1:35240). Jan 13 21:52:28.877279 systemd-logind[1560]: Removed session 16. Jan 13 21:52:30.164545 sshd[4288]: Accepted publickey for core from 172.24.4.1 port 35240 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:30.204087 sshd[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:30.215216 systemd-logind[1560]: New session 17 of user core. Jan 13 21:52:30.221181 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:52:31.056527 sshd[4288]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:31.062717 systemd[1]: sshd@14-172.24.4.62:22-172.24.4.1:35240.service: Deactivated successfully. Jan 13 21:52:31.071647 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:52:31.074756 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:52:31.078977 systemd-logind[1560]: Removed session 17. Jan 13 21:52:36.066627 systemd[1]: Started sshd@15-172.24.4.62:22-172.24.4.1:34146.service - OpenSSH per-connection server daemon (172.24.4.1:34146). Jan 13 21:52:37.139511 sshd[4304]: Accepted publickey for core from 172.24.4.1 port 34146 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:37.142453 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:37.151715 systemd-logind[1560]: New session 18 of user core. Jan 13 21:52:37.157624 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 13 21:52:37.912768 sshd[4304]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:37.924768 systemd[1]: Started sshd@16-172.24.4.62:22-172.24.4.1:34156.service - OpenSSH per-connection server daemon (172.24.4.1:34156). Jan 13 21:52:37.925869 systemd[1]: sshd@15-172.24.4.62:22-172.24.4.1:34146.service: Deactivated successfully. Jan 13 21:52:37.935495 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:52:37.940270 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:52:37.945049 systemd-logind[1560]: Removed session 18. Jan 13 21:52:40.069769 sshd[4315]: Accepted publickey for core from 172.24.4.1 port 34156 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:40.072572 sshd[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:40.086269 systemd-logind[1560]: New session 19 of user core. Jan 13 21:52:40.092747 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:52:40.981390 sshd[4315]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:40.992497 systemd[1]: Started sshd@17-172.24.4.62:22-172.24.4.1:34158.service - OpenSSH per-connection server daemon (172.24.4.1:34158). Jan 13 21:52:40.993000 systemd[1]: sshd@16-172.24.4.62:22-172.24.4.1:34156.service: Deactivated successfully. Jan 13 21:52:40.996029 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:52:40.998125 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:52:40.999906 systemd-logind[1560]: Removed session 19. Jan 13 21:52:42.130745 sshd[4328]: Accepted publickey for core from 172.24.4.1 port 34158 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:42.133510 sshd[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:42.144013 systemd-logind[1560]: New session 20 of user core. Jan 13 21:52:42.150783 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:52:44.746455 sshd[4328]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:44.752870 systemd[1]: Started sshd@18-172.24.4.62:22-172.24.4.1:45936.service - OpenSSH per-connection server daemon (172.24.4.1:45936). Jan 13 21:52:44.754356 systemd[1]: sshd@17-172.24.4.62:22-172.24.4.1:34158.service: Deactivated successfully. Jan 13 21:52:44.759841 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:52:44.760265 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:52:44.763672 systemd-logind[1560]: Removed session 20. Jan 13 21:52:46.114032 sshd[4346]: Accepted publickey for core from 172.24.4.1 port 45936 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:46.117021 sshd[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:46.129225 systemd-logind[1560]: New session 21 of user core. Jan 13 21:52:46.134992 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:52:46.912405 sshd[4346]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:46.916841 systemd[1]: sshd@18-172.24.4.62:22-172.24.4.1:45936.service: Deactivated successfully. Jan 13 21:52:46.920426 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:52:46.924388 systemd-logind[1560]: Session 21 logged out. Waiting for processes to exit. 
Jan 13 21:52:46.931820 systemd[1]: Started sshd@19-172.24.4.62:22-172.24.4.1:45942.service - OpenSSH per-connection server daemon (172.24.4.1:45942). Jan 13 21:52:46.940087 systemd-logind[1560]: Removed session 21. Jan 13 21:52:47.999276 sshd[4360]: Accepted publickey for core from 172.24.4.1 port 45942 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:48.001936 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:48.011451 systemd-logind[1560]: New session 22 of user core. Jan 13 21:52:48.020712 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 21:52:48.785550 sshd[4360]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:48.791426 systemd[1]: sshd@19-172.24.4.62:22-172.24.4.1:45942.service: Deactivated successfully. Jan 13 21:52:48.799426 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:52:48.800393 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:52:48.803752 systemd-logind[1560]: Removed session 22. Jan 13 21:52:53.795611 systemd[1]: Started sshd@20-172.24.4.62:22-172.24.4.1:37178.service - OpenSSH per-connection server daemon (172.24.4.1:37178). Jan 13 21:52:55.204202 sshd[4379]: Accepted publickey for core from 172.24.4.1 port 37178 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:55.206958 sshd[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:55.218225 systemd-logind[1560]: New session 23 of user core. Jan 13 21:52:55.225567 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:52:55.945303 sshd[4379]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:55.957244 systemd[1]: sshd@20-172.24.4.62:22-172.24.4.1:37178.service: Deactivated successfully. Jan 13 21:52:55.968617 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:52:55.970623 systemd-logind[1560]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:52:55.973639 systemd-logind[1560]: Removed session 23. Jan 13 21:53:00.957689 systemd[1]: Started sshd@21-172.24.4.62:22-172.24.4.1:37192.service - OpenSSH per-connection server daemon (172.24.4.1:37192). Jan 13 21:53:02.335517 sshd[4393]: Accepted publickey for core from 172.24.4.1 port 37192 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:53:02.338481 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:53:02.350174 systemd-logind[1560]: New session 24 of user core. Jan 13 21:53:02.356777 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:53:03.111005 sshd[4393]: pam_unix(sshd:session): session closed for user core Jan 13 21:53:03.114525 systemd[1]: sshd@21-172.24.4.62:22-172.24.4.1:37192.service: Deactivated successfully. Jan 13 21:53:03.119928 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:53:03.121489 systemd-logind[1560]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:53:03.126321 systemd[1]: Started sshd@22-172.24.4.62:22-172.24.4.1:37198.service - OpenSSH per-connection server daemon (172.24.4.1:37198). Jan 13 21:53:03.128527 systemd-logind[1560]: Removed session 24. 
Jan 13 21:53:04.207285 sshd[4407]: Accepted publickey for core from 172.24.4.1 port 37198 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:53:04.210593 sshd[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:53:04.222063 systemd-logind[1560]: New session 25 of user core. Jan 13 21:53:04.229637 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 21:53:06.312647 containerd[1580]: time="2025-01-13T21:53:06.312581167Z" level=info msg="StopContainer for \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\" with timeout 30 (s)" Jan 13 21:53:06.313497 containerd[1580]: time="2025-01-13T21:53:06.313468680Z" level=info msg="Stop container \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\" with signal terminated" Jan 13 21:53:06.334164 systemd[1]: run-containerd-runc-k8s.io-04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b-runc.eMUnL1.mount: Deactivated successfully. Jan 13 21:53:06.348170 containerd[1580]: time="2025-01-13T21:53:06.347487462Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:53:06.356786 containerd[1580]: time="2025-01-13T21:53:06.356754098Z" level=info msg="StopContainer for \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\" with timeout 2 (s)" Jan 13 21:53:06.357264 containerd[1580]: time="2025-01-13T21:53:06.357246976Z" level=info msg="Stop container \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\" with signal terminated" Jan 13 21:53:06.363693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d-rootfs.mount: Deactivated successfully. Jan 13 21:53:06.369937 systemd-networkd[1210]: lxc_health: Link DOWN Jan 13 21:53:06.369947 systemd-networkd[1210]: lxc_health: Lost carrier Jan 13 21:53:06.404705 containerd[1580]: time="2025-01-13T21:53:06.404233438Z" level=info msg="shim disconnected" id=9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d namespace=k8s.io Jan 13 21:53:06.404705 containerd[1580]: time="2025-01-13T21:53:06.404338311Z" level=warning msg="cleaning up after shim disconnected" id=9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d namespace=k8s.io Jan 13 21:53:06.404705 containerd[1580]: time="2025-01-13T21:53:06.404390272Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:53:06.424138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b-rootfs.mount: Deactivated successfully. 
Jan 13 21:53:06.454827 containerd[1580]: time="2025-01-13T21:53:06.454733550Z" level=info msg="StopContainer for \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\" returns successfully"
Jan 13 21:53:06.456143 containerd[1580]: time="2025-01-13T21:53:06.455949219Z" level=info msg="StopPodSandbox for \"7f16cfd8c8d416d9f631dcbc92095ca42395441dc7be661403620d7180496d9a\""
Jan 13 21:53:06.456143 containerd[1580]: time="2025-01-13T21:53:06.455985459Z" level=info msg="Container to stop \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:53:06.460169 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f16cfd8c8d416d9f631dcbc92095ca42395441dc7be661403620d7180496d9a-shm.mount: Deactivated successfully.
Jan 13 21:53:06.474726 containerd[1580]: time="2025-01-13T21:53:06.474405714Z" level=info msg="shim disconnected" id=04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b namespace=k8s.io
Jan 13 21:53:06.474726 containerd[1580]: time="2025-01-13T21:53:06.474497533Z" level=warning msg="cleaning up after shim disconnected" id=04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b namespace=k8s.io
Jan 13 21:53:06.474726 containerd[1580]: time="2025-01-13T21:53:06.474517180Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:53:06.521521 containerd[1580]: time="2025-01-13T21:53:06.521412235Z" level=info msg="StopContainer for \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\" returns successfully"
Jan 13 21:53:06.521889 containerd[1580]: time="2025-01-13T21:53:06.521848342Z" level=info msg="StopPodSandbox for \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\""
Jan 13 21:53:06.521981 containerd[1580]: time="2025-01-13T21:53:06.521894011Z" level=info msg="Container to stop \"640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:53:06.521981 containerd[1580]: time="2025-01-13T21:53:06.521907696Z" level=info msg="Container to stop \"25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:53:06.521981 containerd[1580]: time="2025-01-13T21:53:06.521919590Z" level=info msg="Container to stop \"c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:53:06.521981 containerd[1580]: time="2025-01-13T21:53:06.521930762Z" level=info msg="Container to stop \"bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:53:06.521981 containerd[1580]: time="2025-01-13T21:53:06.521941914Z" level=info msg="Container to stop \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:53:06.541869 containerd[1580]: time="2025-01-13T21:53:06.541766373Z" level=info msg="shim disconnected" id=7f16cfd8c8d416d9f631dcbc92095ca42395441dc7be661403620d7180496d9a namespace=k8s.io
Jan 13 21:53:06.542008 containerd[1580]: time="2025-01-13T21:53:06.541865145Z" level=warning msg="cleaning up after shim disconnected" id=7f16cfd8c8d416d9f631dcbc92095ca42395441dc7be661403620d7180496d9a namespace=k8s.io
Jan 13 21:53:06.542008 containerd[1580]: time="2025-01-13T21:53:06.541887729Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:53:06.574705 containerd[1580]: time="2025-01-13T21:53:06.563894975Z" level=info msg="TearDown network for sandbox \"7f16cfd8c8d416d9f631dcbc92095ca42395441dc7be661403620d7180496d9a\" successfully"
Jan 13 21:53:06.574705 containerd[1580]: time="2025-01-13T21:53:06.563931696Z" level=info msg="StopPodSandbox for \"7f16cfd8c8d416d9f631dcbc92095ca42395441dc7be661403620d7180496d9a\" returns successfully"
Jan 13 21:53:06.601198 containerd[1580]: time="2025-01-13T21:53:06.601057209Z" level=info msg="shim disconnected" id=bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802 namespace=k8s.io
Jan 13 21:53:06.601198 containerd[1580]: time="2025-01-13T21:53:06.601142865Z" level=warning msg="cleaning up after shim disconnected" id=bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802 namespace=k8s.io
Jan 13 21:53:06.601198 containerd[1580]: time="2025-01-13T21:53:06.601152994Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:53:06.615267 containerd[1580]: time="2025-01-13T21:53:06.615207585Z" level=info msg="TearDown network for sandbox \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\" successfully"
Jan 13 21:53:06.615267 containerd[1580]: time="2025-01-13T21:53:06.615246320Z" level=info msg="StopPodSandbox for \"bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802\" returns successfully"
Jan 13 21:53:06.680011 kubelet[2844]: I0113 21:53:06.679143 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-etc-cni-netd\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680011 kubelet[2844]: I0113 21:53:06.679199 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab32633d-1989-46c7-a9f8-25caed4c696b-clustermesh-secrets\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680011 kubelet[2844]: I0113 21:53:06.679240 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-host-proc-sys-kernel\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680011 kubelet[2844]: I0113 21:53:06.679265 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab32633d-1989-46c7-a9f8-25caed4c696b-hubble-tls\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680011 kubelet[2844]: I0113 21:53:06.679292 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-config-path\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680011 kubelet[2844]: I0113 21:53:06.679313 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-lib-modules\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680555 kubelet[2844]: I0113 21:53:06.679337 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-cgroup\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680555 kubelet[2844]: I0113 21:53:06.679359 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-host-proc-sys-net\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680555 kubelet[2844]: I0113 21:53:06.679337 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:53:06.680555 kubelet[2844]: I0113 21:53:06.679384 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmxcz\" (UniqueName: \"kubernetes.io/projected/4fb08bff-c63e-45dc-b459-e2214fc25561-kube-api-access-fmxcz\") pod \"4fb08bff-c63e-45dc-b459-e2214fc25561\" (UID: \"4fb08bff-c63e-45dc-b459-e2214fc25561\") "
Jan 13 21:53:06.680555 kubelet[2844]: I0113 21:53:06.679412 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fb08bff-c63e-45dc-b459-e2214fc25561-cilium-config-path\") pod \"4fb08bff-c63e-45dc-b459-e2214fc25561\" (UID: \"4fb08bff-c63e-45dc-b459-e2214fc25561\") "
Jan 13 21:53:06.680555 kubelet[2844]: I0113 21:53:06.679435 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-xtables-lock\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680701 kubelet[2844]: I0113 21:53:06.679459 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-bpf-maps\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680701 kubelet[2844]: I0113 21:53:06.679479 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-run\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680701 kubelet[2844]: I0113 21:53:06.679498 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-hostproc\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680701 kubelet[2844]: I0113 21:53:06.679524 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smnkz\" (UniqueName: \"kubernetes.io/projected/ab32633d-1989-46c7-a9f8-25caed4c696b-kube-api-access-smnkz\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680701 kubelet[2844]: I0113 21:53:06.679548 2844 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-cni-path\") pod \"ab32633d-1989-46c7-a9f8-25caed4c696b\" (UID: \"ab32633d-1989-46c7-a9f8-25caed4c696b\") "
Jan 13 21:53:06.680701 kubelet[2844]: I0113 21:53:06.679586 2844 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-etc-cni-netd\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.680837 kubelet[2844]: I0113 21:53:06.679620 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-cni-path" (OuterVolumeSpecName: "cni-path") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:53:06.680837 kubelet[2844]: I0113 21:53:06.679650 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:53:06.680837 kubelet[2844]: I0113 21:53:06.679668 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:53:06.680837 kubelet[2844]: I0113 21:53:06.679687 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:53:06.684959 kubelet[2844]: I0113 21:53:06.684798 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 21:53:06.686351 kubelet[2844]: I0113 21:53:06.686211 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fb08bff-c63e-45dc-b459-e2214fc25561-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4fb08bff-c63e-45dc-b459-e2214fc25561" (UID: "4fb08bff-c63e-45dc-b459-e2214fc25561"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 21:53:06.686351 kubelet[2844]: I0113 21:53:06.686290 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:53:06.686351 kubelet[2844]: I0113 21:53:06.686312 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:53:06.686351 kubelet[2844]: I0113 21:53:06.686329 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:53:06.686589 kubelet[2844]: I0113 21:53:06.686528 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-hostproc" (OuterVolumeSpecName: "hostproc") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:53:06.687783 kubelet[2844]: I0113 21:53:06.687666 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb08bff-c63e-45dc-b459-e2214fc25561-kube-api-access-fmxcz" (OuterVolumeSpecName: "kube-api-access-fmxcz") pod "4fb08bff-c63e-45dc-b459-e2214fc25561" (UID: "4fb08bff-c63e-45dc-b459-e2214fc25561"). InnerVolumeSpecName "kube-api-access-fmxcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:53:06.687783 kubelet[2844]: I0113 21:53:06.687744 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:53:06.690650 kubelet[2844]: I0113 21:53:06.690596 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab32633d-1989-46c7-a9f8-25caed4c696b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 21:53:06.691409 kubelet[2844]: I0113 21:53:06.691302 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab32633d-1989-46c7-a9f8-25caed4c696b-kube-api-access-smnkz" (OuterVolumeSpecName: "kube-api-access-smnkz") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "kube-api-access-smnkz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:53:06.692171 kubelet[2844]: I0113 21:53:06.692144 2844 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab32633d-1989-46c7-a9f8-25caed4c696b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ab32633d-1989-46c7-a9f8-25caed4c696b" (UID: "ab32633d-1989-46c7-a9f8-25caed4c696b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:53:06.780546 kubelet[2844]: I0113 21:53:06.780458 2844 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-xtables-lock\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.780546 kubelet[2844]: I0113 21:53:06.780525 2844 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-bpf-maps\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.780546 kubelet[2844]: I0113 21:53:06.780559 2844 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-run\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.780873 kubelet[2844]: I0113 21:53:06.780618 2844 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-hostproc\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.780873 kubelet[2844]: I0113 21:53:06.780654 2844 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-smnkz\" (UniqueName: \"kubernetes.io/projected/ab32633d-1989-46c7-a9f8-25caed4c696b-kube-api-access-smnkz\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.780873 kubelet[2844]: I0113 21:53:06.780682 2844 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-cni-path\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.780873 kubelet[2844]: I0113 21:53:06.780712 2844 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab32633d-1989-46c7-a9f8-25caed4c696b-clustermesh-secrets\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.780873 kubelet[2844]: I0113 21:53:06.780743 2844 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-host-proc-sys-kernel\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.780873 kubelet[2844]: I0113 21:53:06.780773 2844 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-config-path\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.780873 kubelet[2844]: I0113 21:53:06.780806 2844 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-lib-modules\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.781359 kubelet[2844]: I0113 21:53:06.780884 2844 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-cilium-cgroup\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.781359 kubelet[2844]: I0113 21:53:06.780912 2844 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab32633d-1989-46c7-a9f8-25caed4c696b-hubble-tls\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.781359 kubelet[2844]: I0113 21:53:06.780942 2844 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab32633d-1989-46c7-a9f8-25caed4c696b-host-proc-sys-net\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.781359 kubelet[2844]: I0113 21:53:06.780972 2844 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fmxcz\" (UniqueName: \"kubernetes.io/projected/4fb08bff-c63e-45dc-b459-e2214fc25561-kube-api-access-fmxcz\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:06.781359 kubelet[2844]: I0113 21:53:06.781002 2844 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fb08bff-c63e-45dc-b459-e2214fc25561-cilium-config-path\") on node \"ci-4081-3-0-d-9566454817.novalocal\" DevicePath \"\""
Jan 13 21:53:07.326759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f16cfd8c8d416d9f631dcbc92095ca42395441dc7be661403620d7180496d9a-rootfs.mount: Deactivated successfully.
Jan 13 21:53:07.327616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802-rootfs.mount: Deactivated successfully.
Jan 13 21:53:07.327883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdec562d19bff01578b4bfad6f6dd973fa854534a7170053cae334ed0e29b802-shm.mount: Deactivated successfully.
Jan 13 21:53:07.328170 systemd[1]: var-lib-kubelet-pods-ab32633d\x2d1989\x2d46c7\x2da9f8\x2d25caed4c696b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 21:53:07.328422 systemd[1]: var-lib-kubelet-pods-ab32633d\x2d1989\x2d46c7\x2da9f8\x2d25caed4c696b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 21:53:07.328666 systemd[1]: var-lib-kubelet-pods-4fb08bff\x2dc63e\x2d45dc\x2db459\x2de2214fc25561-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfmxcz.mount: Deactivated successfully.
Jan 13 21:53:07.328915 systemd[1]: var-lib-kubelet-pods-ab32633d\x2d1989\x2d46c7\x2da9f8\x2d25caed4c696b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsmnkz.mount: Deactivated successfully.
Jan 13 21:53:07.347577 kubelet[2844]: I0113 21:53:07.345437 2844 scope.go:117] "RemoveContainer" containerID="9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d"
Jan 13 21:53:07.361861 containerd[1580]: time="2025-01-13T21:53:07.359383427Z" level=info msg="RemoveContainer for \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\""
Jan 13 21:53:07.390710 containerd[1580]: time="2025-01-13T21:53:07.390471152Z" level=info msg="RemoveContainer for \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\" returns successfully"
Jan 13 21:53:07.393332 kubelet[2844]: I0113 21:53:07.391832 2844 scope.go:117] "RemoveContainer" containerID="9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d"
Jan 13 21:53:07.395560 containerd[1580]: time="2025-01-13T21:53:07.395419257Z" level=error msg="ContainerStatus for \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\": not found"
Jan 13 21:53:07.395874 kubelet[2844]: E0113 21:53:07.395837 2844 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\": not found" containerID="9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d"
Jan 13 21:53:07.396183 kubelet[2844]: I0113 21:53:07.396159 2844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d"} err="failed to get container status \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b804b7da45fdb4101e0a5ce063e43ba7cec04b07957cf524c425d5acff4994d\": not found"
Jan 13 21:53:07.396904 kubelet[2844]: I0113 21:53:07.396648 2844 scope.go:117] "RemoveContainer" containerID="04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b"
Jan 13 21:53:07.398778 containerd[1580]: time="2025-01-13T21:53:07.398657033Z" level=info msg="RemoveContainer for \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\""
Jan 13 21:53:07.403024 containerd[1580]: time="2025-01-13T21:53:07.402988952Z" level=info msg="RemoveContainer for \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\" returns successfully"
Jan 13 21:53:07.403357 kubelet[2844]: I0113 21:53:07.403319 2844 scope.go:117] "RemoveContainer" containerID="bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7"
Jan 13 21:53:07.404507 containerd[1580]: time="2025-01-13T21:53:07.404463524Z" level=info msg="RemoveContainer for \"bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7\""
Jan 13 21:53:07.410339 containerd[1580]: time="2025-01-13T21:53:07.410222653Z" level=info msg="RemoveContainer for \"bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7\" returns successfully"
Jan 13 21:53:07.410990 kubelet[2844]: I0113 21:53:07.410787 2844 scope.go:117] "RemoveContainer" containerID="c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99"
Jan 13 21:53:07.413814 containerd[1580]: time="2025-01-13T21:53:07.413784387Z" level=info msg="RemoveContainer for \"c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99\""
Jan 13 21:53:07.417457 containerd[1580]: time="2025-01-13T21:53:07.417433480Z" level=info msg="RemoveContainer for \"c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99\" returns successfully"
Jan 13 21:53:07.417614 kubelet[2844]: I0113 21:53:07.417595 2844 scope.go:117] "RemoveContainer" containerID="25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e"
Jan 13 21:53:07.418657 containerd[1580]: time="2025-01-13T21:53:07.418633570Z" level=info msg="RemoveContainer for \"25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e\""
Jan 13 21:53:07.423453 containerd[1580]: time="2025-01-13T21:53:07.423431874Z" level=info msg="RemoveContainer for \"25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e\" returns successfully"
Jan 13 21:53:07.423784 kubelet[2844]: I0113 21:53:07.423759 2844 scope.go:117] "RemoveContainer" containerID="640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e"
Jan 13 21:53:07.424994 containerd[1580]: time="2025-01-13T21:53:07.424963657Z" level=info msg="RemoveContainer for \"640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e\""
Jan 13 21:53:07.428844 containerd[1580]: time="2025-01-13T21:53:07.428803852Z" level=info msg="RemoveContainer for \"640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e\" returns successfully"
Jan 13 21:53:07.429018 kubelet[2844]: I0113 21:53:07.428982 2844 scope.go:117] "RemoveContainer" containerID="04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b"
Jan 13 21:53:07.429381 containerd[1580]: time="2025-01-13T21:53:07.429281319Z" level=error msg="ContainerStatus for \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\": not found"
Jan 13 21:53:07.429715 kubelet[2844]: E0113 21:53:07.429608 2844 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\": not found" containerID="04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b"
Jan 13 21:53:07.429715 kubelet[2844]: I0113 21:53:07.429668 2844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b"} err="failed to get container status \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\": rpc error: code = NotFound desc = an error occurred when try to find container \"04dc9cdf3a6af445d64125032a84798e9d3594708de859697858d2ebe542f30b\": not found"
Jan 13 21:53:07.429715 kubelet[2844]: I0113 21:53:07.429683 2844 scope.go:117] "RemoveContainer" containerID="bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7"
Jan 13 21:53:07.430129 containerd[1580]: time="2025-01-13T21:53:07.430025794Z" level=error msg="ContainerStatus for \"bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7\": not found"
Jan 13 21:53:07.430237 kubelet[2844]: E0113 21:53:07.430221 2844 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7\": not found" containerID="bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7"
Jan 13 21:53:07.430291 kubelet[2844]: I0113 21:53:07.430255 2844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7"} err="failed to get container status \"bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"bff2b246e15790ad85e6c876db47cdf7d48e1d12f90eefb3510113859fdf92f7\": not found"
Jan 13 21:53:07.430291 kubelet[2844]: I0113 21:53:07.430268 2844 scope.go:117] "RemoveContainer" containerID="c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99"
Jan 13 21:53:07.430444 containerd[1580]: time="2025-01-13T21:53:07.430413266Z" level=error msg="ContainerStatus for \"c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99\": not found"
Jan 13 21:53:07.430541 kubelet[2844]: E0113 21:53:07.430524 2844 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99\": not found" containerID="c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99"
Jan 13 21:53:07.430582 kubelet[2844]: I0113 21:53:07.430555 2844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99"} err="failed to get container status \"c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2ec103d706941547ed94e2e7048c5fae7b4b78a8df69b26858666e8040a4e99\": not found"
Jan 13 21:53:07.430582 kubelet[2844]: I0113 21:53:07.430565 2844 scope.go:117] "RemoveContainer" containerID="25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e"
Jan 13 21:53:07.431046 containerd[1580]: time="2025-01-13T21:53:07.430766642Z" level=error msg="ContainerStatus for \"25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e\": not found"
Jan 13 21:53:07.431128 kubelet[2844]: E0113 21:53:07.430938 2844 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e\": not found" containerID="25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e"
Jan 13 21:53:07.431128 kubelet[2844]: I0113 21:53:07.430966 2844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e"} err="failed to get container status \"25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e\": rpc error: code = NotFound desc = an error occurred when try to find container \"25a65925c429e9b6772f31fe785e5fc815142fa40f0c72d8e029458da9d3cd3e\": not found"
Jan 13 21:53:07.431128 kubelet[2844]: I0113 21:53:07.430978 2844 scope.go:117] "RemoveContainer" containerID="640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e"
Jan 13 21:53:07.431219 containerd[1580]: time="2025-01-13T21:53:07.431155987Z" level=error msg="ContainerStatus for \"640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e\": not found"
Jan 13 21:53:07.431384 kubelet[2844]: E0113 21:53:07.431320 2844 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e\": not found" containerID="640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e"
Jan 13 21:53:07.431384 kubelet[2844]: I0113 21:53:07.431350 2844 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e"} err="failed to get container status \"640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e\": rpc error: code = NotFound desc = an error occurred when try to find container \"640df63f4c3d17dc59367779a368a80d5cbab585c5df47ec07d6def0a46fcc6e\": not found"
Jan 13 21:53:07.765601 kubelet[2844]: I0113 21:53:07.765544 2844 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4fb08bff-c63e-45dc-b459-e2214fc25561" path="/var/lib/kubelet/pods/4fb08bff-c63e-45dc-b459-e2214fc25561/volumes"
Jan 13 21:53:07.766651 kubelet[2844]: I0113 21:53:07.766609 2844 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ab32633d-1989-46c7-a9f8-25caed4c696b" path="/var/lib/kubelet/pods/ab32633d-1989-46c7-a9f8-25caed4c696b/volumes"
Jan 13 21:53:08.341626 sshd[4407]: pam_unix(sshd:session): session closed for user core
Jan 13 21:53:08.354162 systemd[1]: Started sshd@23-172.24.4.62:22-172.24.4.1:43246.service - OpenSSH per-connection server daemon (172.24.4.1:43246).
Jan 13 21:53:08.360236 systemd[1]: sshd@22-172.24.4.62:22-172.24.4.1:37198.service: Deactivated successfully.
Jan 13 21:53:08.372387 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 21:53:08.379428 systemd-logind[1560]: Session 25 logged out. Waiting for processes to exit.
Jan 13 21:53:08.384730 systemd-logind[1560]: Removed session 25.
Jan 13 21:53:08.942595 kubelet[2844]: E0113 21:53:08.942528 2844 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:53:09.714602 sshd[4574]: Accepted publickey for core from 172.24.4.1 port 43246 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 21:53:09.717300 sshd[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:53:09.728930 systemd-logind[1560]: New session 26 of user core.
Jan 13 21:53:09.734673 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 21:53:11.221942 kubelet[2844]: I0113 21:53:11.221885 2844 topology_manager.go:215] "Topology Admit Handler" podUID="ced1b13b-746c-41c9-afc6-3f30f6806776" podNamespace="kube-system" podName="cilium-vqdpn"
Jan 13 21:53:11.221942 kubelet[2844]: E0113 21:53:11.221950 2844 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ab32633d-1989-46c7-a9f8-25caed4c696b" containerName="clean-cilium-state"
Jan 13 21:53:11.222467 kubelet[2844]: E0113 21:53:11.221963 2844 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4fb08bff-c63e-45dc-b459-e2214fc25561" containerName="cilium-operator"
Jan 13 21:53:11.222467 kubelet[2844]: E0113 21:53:11.221973 2844 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ab32633d-1989-46c7-a9f8-25caed4c696b" containerName="mount-cgroup"
Jan 13 21:53:11.222467 kubelet[2844]: E0113 21:53:11.221982 2844 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ab32633d-1989-46c7-a9f8-25caed4c696b" containerName="apply-sysctl-overwrites"
Jan 13 21:53:11.222467 kubelet[2844]: E0113 21:53:11.221989 2844 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ab32633d-1989-46c7-a9f8-25caed4c696b" containerName="mount-bpf-fs"
Jan 13 21:53:11.222467 kubelet[2844]: E0113 21:53:11.221998 2844 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ab32633d-1989-46c7-a9f8-25caed4c696b" containerName="cilium-agent"
Jan 13 21:53:11.222467 kubelet[2844]: I0113 21:53:11.222021 2844 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab32633d-1989-46c7-a9f8-25caed4c696b" containerName="cilium-agent"
Jan 13 21:53:11.222467 kubelet[2844]: I0113 21:53:11.222029 2844 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb08bff-c63e-45dc-b459-e2214fc25561" containerName="cilium-operator"
Jan 13 21:53:11.312437 kubelet[2844]: I0113 21:53:11.312394 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ced1b13b-746c-41c9-afc6-3f30f6806776-bpf-maps\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.312437 kubelet[2844]: I0113 21:53:11.312440 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ced1b13b-746c-41c9-afc6-3f30f6806776-cni-path\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.312437 kubelet[2844]: I0113 21:53:11.312481 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ced1b13b-746c-41c9-afc6-3f30f6806776-clustermesh-secrets\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.313044 kubelet[2844]: I0113 21:53:11.312509 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ced1b13b-746c-41c9-afc6-3f30f6806776-host-proc-sys-kernel\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.313044 kubelet[2844]: I0113 21:53:11.312537 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ced1b13b-746c-41c9-afc6-3f30f6806776-etc-cni-netd\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.313044 kubelet[2844]: I0113 21:53:11.312560 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ced1b13b-746c-41c9-afc6-3f30f6806776-lib-modules\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.313044 kubelet[2844]: I0113 21:53:11.312583 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ced1b13b-746c-41c9-afc6-3f30f6806776-xtables-lock\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.313044 kubelet[2844]: I0113 21:53:11.312606 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ced1b13b-746c-41c9-afc6-3f30f6806776-cilium-ipsec-secrets\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.313044 kubelet[2844]: I0113 21:53:11.312630 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ced1b13b-746c-41c9-afc6-3f30f6806776-cilium-run\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.313573 kubelet[2844]: I0113 21:53:11.312653 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ced1b13b-746c-41c9-afc6-3f30f6806776-cilium-cgroup\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.313573 kubelet[2844]: I0113 21:53:11.312678 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ced1b13b-746c-41c9-afc6-3f30f6806776-cilium-config-path\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.313573 kubelet[2844]: I0113 21:53:11.312703 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8fcx\" (UniqueName: \"kubernetes.io/projected/ced1b13b-746c-41c9-afc6-3f30f6806776-kube-api-access-j8fcx\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.313573 kubelet[2844]: I0113 21:53:11.312726 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ced1b13b-746c-41c9-afc6-3f30f6806776-hostproc\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.313573 kubelet[2844]: I0113 21:53:11.312751 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ced1b13b-746c-41c9-afc6-3f30f6806776-hubble-tls\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.313573 kubelet[2844]: I0113 21:53:11.312776 2844 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ced1b13b-746c-41c9-afc6-3f30f6806776-host-proc-sys-net\") pod \"cilium-vqdpn\" (UID: \"ced1b13b-746c-41c9-afc6-3f30f6806776\") " pod="kube-system/cilium-vqdpn"
Jan 13 21:53:11.318588 sshd[4574]: pam_unix(sshd:session): session closed for user core
Jan 13 21:53:11.325360 systemd[1]: Started sshd@24-172.24.4.62:22-172.24.4.1:43254.service - OpenSSH per-connection server daemon (172.24.4.1:43254).
Jan 13 21:53:11.325779 systemd[1]: sshd@23-172.24.4.62:22-172.24.4.1:43246.service: Deactivated successfully.
Jan 13 21:53:11.332924 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 21:53:11.333655 systemd-logind[1560]: Session 26 logged out. Waiting for processes to exit.
Jan 13 21:53:11.335364 systemd-logind[1560]: Removed session 26.
Jan 13 21:53:11.531292 containerd[1580]: time="2025-01-13T21:53:11.530803140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vqdpn,Uid:ced1b13b-746c-41c9-afc6-3f30f6806776,Namespace:kube-system,Attempt:0,}"
Jan 13 21:53:11.562164 containerd[1580]: time="2025-01-13T21:53:11.558558971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:53:11.562164 containerd[1580]: time="2025-01-13T21:53:11.558608206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:53:11.562164 containerd[1580]: time="2025-01-13T21:53:11.558621392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:53:11.562624 containerd[1580]: time="2025-01-13T21:53:11.559054292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:53:11.611006 containerd[1580]: time="2025-01-13T21:53:11.610905135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vqdpn,Uid:ced1b13b-746c-41c9-afc6-3f30f6806776,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7329a2d1df98a1d18cad1292d1c4fcccad15a5ec60a5f8f8ad2687137ef503d\""
Jan 13 21:53:11.615442 containerd[1580]: time="2025-01-13T21:53:11.615380736Z" level=info msg="CreateContainer within sandbox \"e7329a2d1df98a1d18cad1292d1c4fcccad15a5ec60a5f8f8ad2687137ef503d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:53:11.689942 containerd[1580]: time="2025-01-13T21:53:11.689873701Z" level=info msg="CreateContainer within sandbox \"e7329a2d1df98a1d18cad1292d1c4fcccad15a5ec60a5f8f8ad2687137ef503d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"83afe240e8ad92c9406fb243fe860ebca687de01f93807e39c4207b3820cd7f4\""
Jan 13 21:53:11.692130 containerd[1580]: time="2025-01-13T21:53:11.690567699Z" level=info msg="StartContainer for \"83afe240e8ad92c9406fb243fe860ebca687de01f93807e39c4207b3820cd7f4\""
Jan 13 21:53:11.744324 containerd[1580]: time="2025-01-13T21:53:11.744162298Z" level=info msg="StartContainer for \"83afe240e8ad92c9406fb243fe860ebca687de01f93807e39c4207b3820cd7f4\" returns successfully"
Jan 13 21:53:11.797205 containerd[1580]: time="2025-01-13T21:53:11.797021619Z" level=info msg="shim disconnected" id=83afe240e8ad92c9406fb243fe860ebca687de01f93807e39c4207b3820cd7f4 namespace=k8s.io
Jan 13 21:53:11.797763 containerd[1580]: time="2025-01-13T21:53:11.797436615Z" level=warning msg="cleaning up after shim disconnected" id=83afe240e8ad92c9406fb243fe860ebca687de01f93807e39c4207b3820cd7f4 namespace=k8s.io
Jan 13 21:53:11.797763 containerd[1580]: time="2025-01-13T21:53:11.797467074Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:53:12.402014 containerd[1580]: time="2025-01-13T21:53:12.401645308Z" level=info msg="CreateContainer within sandbox \"e7329a2d1df98a1d18cad1292d1c4fcccad15a5ec60a5f8f8ad2687137ef503d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:53:12.461459 containerd[1580]: time="2025-01-13T21:53:12.461275682Z" level=info msg="CreateContainer within sandbox \"e7329a2d1df98a1d18cad1292d1c4fcccad15a5ec60a5f8f8ad2687137ef503d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"06dc426a7f178ef86ae54733daa6eceb90a43a4bd2ce060f92fb07328929302a\""
Jan 13 21:53:12.462451 containerd[1580]: time="2025-01-13T21:53:12.462277728Z" level=info msg="StartContainer for \"06dc426a7f178ef86ae54733daa6eceb90a43a4bd2ce060f92fb07328929302a\""
Jan 13 21:53:12.535764 containerd[1580]: time="2025-01-13T21:53:12.535706746Z" level=info msg="StartContainer for \"06dc426a7f178ef86ae54733daa6eceb90a43a4bd2ce060f92fb07328929302a\" returns successfully"
Jan 13 21:53:12.571323 sshd[4586]: Accepted publickey for core from 172.24.4.1 port 43254 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 21:53:12.573936 sshd[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:53:12.581533 systemd-logind[1560]: New session 27 of user core.
Jan 13 21:53:12.585997 containerd[1580]: time="2025-01-13T21:53:12.583894722Z" level=info msg="shim disconnected" id=06dc426a7f178ef86ae54733daa6eceb90a43a4bd2ce060f92fb07328929302a namespace=k8s.io
Jan 13 21:53:12.585997 containerd[1580]: time="2025-01-13T21:53:12.583962884Z" level=warning msg="cleaning up after shim disconnected" id=06dc426a7f178ef86ae54733daa6eceb90a43a4bd2ce060f92fb07328929302a namespace=k8s.io
Jan 13 21:53:12.585997 containerd[1580]: time="2025-01-13T21:53:12.583977823Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:53:12.590542 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 13 21:53:13.276415 sshd[4586]: pam_unix(sshd:session): session closed for user core
Jan 13 21:53:13.292259 systemd[1]: Started sshd@25-172.24.4.62:22-172.24.4.1:43262.service - OpenSSH per-connection server daemon (172.24.4.1:43262).
Jan 13 21:53:13.298210 systemd[1]: sshd@24-172.24.4.62:22-172.24.4.1:43254.service: Deactivated successfully.
Jan 13 21:53:13.310286 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 21:53:13.319327 systemd-logind[1560]: Session 27 logged out. Waiting for processes to exit.
Jan 13 21:53:13.325191 systemd-logind[1560]: Removed session 27.
Jan 13 21:53:13.410324 containerd[1580]: time="2025-01-13T21:53:13.409532695Z" level=info msg="CreateContainer within sandbox \"e7329a2d1df98a1d18cad1292d1c4fcccad15a5ec60a5f8f8ad2687137ef503d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:53:13.436328 systemd[1]: run-containerd-runc-k8s.io-06dc426a7f178ef86ae54733daa6eceb90a43a4bd2ce060f92fb07328929302a-runc.XUtIx3.mount: Deactivated successfully.
Jan 13 21:53:13.436658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06dc426a7f178ef86ae54733daa6eceb90a43a4bd2ce060f92fb07328929302a-rootfs.mount: Deactivated successfully.
Jan 13 21:53:13.452906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890206851.mount: Deactivated successfully.
Jan 13 21:53:13.458400 containerd[1580]: time="2025-01-13T21:53:13.458315813Z" level=info msg="CreateContainer within sandbox \"e7329a2d1df98a1d18cad1292d1c4fcccad15a5ec60a5f8f8ad2687137ef503d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"67c4887c8373b963e2282d33666fca069873f831b5c28877201e54d0bc8a2005\""
Jan 13 21:53:13.459052 containerd[1580]: time="2025-01-13T21:53:13.459002025Z" level=info msg="StartContainer for \"67c4887c8373b963e2282d33666fca069873f831b5c28877201e54d0bc8a2005\""
Jan 13 21:53:13.519359 containerd[1580]: time="2025-01-13T21:53:13.518769292Z" level=info msg="StartContainer for \"67c4887c8373b963e2282d33666fca069873f831b5c28877201e54d0bc8a2005\" returns successfully"
Jan 13 21:53:13.547489 containerd[1580]: time="2025-01-13T21:53:13.547346573Z" level=info msg="shim disconnected" id=67c4887c8373b963e2282d33666fca069873f831b5c28877201e54d0bc8a2005 namespace=k8s.io
Jan 13 21:53:13.547489 containerd[1580]: time="2025-01-13T21:53:13.547396790Z" level=warning msg="cleaning up after shim disconnected" id=67c4887c8373b963e2282d33666fca069873f831b5c28877201e54d0bc8a2005 namespace=k8s.io
Jan 13 21:53:13.547489 containerd[1580]: time="2025-01-13T21:53:13.547406670Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:53:13.561324 containerd[1580]: time="2025-01-13T21:53:13.561275069Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:53:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:53:13.943890 kubelet[2844]: E0113 21:53:13.943819 2844 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:53:14.417743 containerd[1580]: time="2025-01-13T21:53:14.417227820Z" level=info msg="CreateContainer within sandbox \"e7329a2d1df98a1d18cad1292d1c4fcccad15a5ec60a5f8f8ad2687137ef503d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:53:14.429994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67c4887c8373b963e2282d33666fca069873f831b5c28877201e54d0bc8a2005-rootfs.mount: Deactivated successfully.
Jan 13 21:53:14.467234 sshd[4767]: Accepted publickey for core from 172.24.4.1 port 43262 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 21:53:14.467148 sshd[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:53:14.468373 containerd[1580]: time="2025-01-13T21:53:14.459591382Z" level=info msg="CreateContainer within sandbox \"e7329a2d1df98a1d18cad1292d1c4fcccad15a5ec60a5f8f8ad2687137ef503d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ce3ca1a0ed1969ddcdeb1edd9649ee0533d7364c9fc1c26f5e733dcb5e7712c4\""
Jan 13 21:53:14.468373 containerd[1580]: time="2025-01-13T21:53:14.461763924Z" level=info msg="StartContainer for \"ce3ca1a0ed1969ddcdeb1edd9649ee0533d7364c9fc1c26f5e733dcb5e7712c4\""
Jan 13 21:53:14.480547 systemd-logind[1560]: New session 28 of user core.
Jan 13 21:53:14.486376 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 13 21:53:14.549243 containerd[1580]: time="2025-01-13T21:53:14.549180214Z" level=info msg="StartContainer for \"ce3ca1a0ed1969ddcdeb1edd9649ee0533d7364c9fc1c26f5e733dcb5e7712c4\" returns successfully"
Jan 13 21:53:14.578195 containerd[1580]: time="2025-01-13T21:53:14.576781987Z" level=info msg="shim disconnected" id=ce3ca1a0ed1969ddcdeb1edd9649ee0533d7364c9fc1c26f5e733dcb5e7712c4 namespace=k8s.io
Jan 13 21:53:14.578195 containerd[1580]: time="2025-01-13T21:53:14.577848998Z" level=warning msg="cleaning up after shim disconnected" id=ce3ca1a0ed1969ddcdeb1edd9649ee0533d7364c9fc1c26f5e733dcb5e7712c4 namespace=k8s.io
Jan 13 21:53:14.578195 containerd[1580]: time="2025-01-13T21:53:14.577865669Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:53:14.584198 containerd[1580]: time="2025-01-13T21:53:14.584145179Z" level=error msg="collecting metrics for ce3ca1a0ed1969ddcdeb1edd9649ee0533d7364c9fc1c26f5e733dcb5e7712c4" error="ttrpc: closed: unknown"
Jan 13 21:53:15.433208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce3ca1a0ed1969ddcdeb1edd9649ee0533d7364c9fc1c26f5e733dcb5e7712c4-rootfs.mount: Deactivated successfully.
Jan 13 21:53:15.438428 containerd[1580]: time="2025-01-13T21:53:15.438309661Z" level=info msg="CreateContainer within sandbox \"e7329a2d1df98a1d18cad1292d1c4fcccad15a5ec60a5f8f8ad2687137ef503d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:53:15.477651 containerd[1580]: time="2025-01-13T21:53:15.476544226Z" level=info msg="CreateContainer within sandbox \"e7329a2d1df98a1d18cad1292d1c4fcccad15a5ec60a5f8f8ad2687137ef503d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1553cd3207402c61b1ffe870df93e6cc48ff6f4a8c693f2d25a131116bb12e01\""
Jan 13 21:53:15.478497 containerd[1580]: time="2025-01-13T21:53:15.478448131Z" level=info msg="StartContainer for \"1553cd3207402c61b1ffe870df93e6cc48ff6f4a8c693f2d25a131116bb12e01\""
Jan 13 21:53:15.546269 containerd[1580]: time="2025-01-13T21:53:15.544958177Z" level=info msg="StartContainer for \"1553cd3207402c61b1ffe870df93e6cc48ff6f4a8c693f2d25a131116bb12e01\" returns successfully"
Jan 13 21:53:15.881194 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:53:15.932134 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jan 13 21:53:16.432874 systemd[1]: run-containerd-runc-k8s.io-1553cd3207402c61b1ffe870df93e6cc48ff6f4a8c693f2d25a131116bb12e01-runc.sRYl3c.mount: Deactivated successfully.
Jan 13 21:53:16.486821 kubelet[2844]: I0113 21:53:16.486734 2844 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vqdpn" podStartSLOduration=5.486642677 podStartE2EDuration="5.486642677s" podCreationTimestamp="2025-01-13 21:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:53:16.48298607 +0000 UTC m=+162.885135390" watchObservedRunningTime="2025-01-13 21:53:16.486642677 +0000 UTC m=+162.888791977"
Jan 13 21:53:17.296235 systemd[1]: run-containerd-runc-k8s.io-1553cd3207402c61b1ffe870df93e6cc48ff6f4a8c693f2d25a131116bb12e01-runc.7IIoBZ.mount: Deactivated successfully.
Jan 13 21:53:17.630180 kubelet[2844]: I0113 21:53:17.629796 2844 setters.go:568] "Node became not ready" node="ci-4081-3-0-d-9566454817.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:53:17Z","lastTransitionTime":"2025-01-13T21:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 21:53:19.208526 systemd-networkd[1210]: lxc_health: Link UP
Jan 13 21:53:19.210492 systemd-networkd[1210]: lxc_health: Gained carrier
Jan 13 21:53:19.496788 systemd[1]: run-containerd-runc-k8s.io-1553cd3207402c61b1ffe870df93e6cc48ff6f4a8c693f2d25a131116bb12e01-runc.UJJRML.mount: Deactivated successfully.
Jan 13 21:53:19.612141 kubelet[2844]: E0113 21:53:19.610417 2844 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36930->127.0.0.1:42349: write tcp 127.0.0.1:36930->127.0.0.1:42349: write: connection reset by peer
Jan 13 21:53:21.227479 systemd-networkd[1210]: lxc_health: Gained IPv6LL
Jan 13 21:53:23.968957 systemd[1]: run-containerd-runc-k8s.io-1553cd3207402c61b1ffe870df93e6cc48ff6f4a8c693f2d25a131116bb12e01-runc.fhkYc3.mount: Deactivated successfully.
Jan 13 21:53:24.057974 kubelet[2844]: E0113 21:53:24.057930 2844 upgradeaware.go:439] Error proxying data from backend to client: read tcp 127.0.0.1:60828->127.0.0.1:42349: read: connection reset by peer
Jan 13 21:53:26.418976 sshd[4767]: pam_unix(sshd:session): session closed for user core
Jan 13 21:53:26.422513 systemd-logind[1560]: Session 28 logged out. Waiting for processes to exit.
Jan 13 21:53:26.423117 systemd[1]: sshd@25-172.24.4.62:22-172.24.4.1:43262.service: Deactivated successfully.
Jan 13 21:53:26.427545 systemd[1]: session-28.scope: Deactivated successfully.
Jan 13 21:53:26.428735 systemd-logind[1560]: Removed session 28.