Jan 13 21:12:24.085991 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 21:12:24.086017 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 21:12:24.086027 kernel: BIOS-provided physical RAM map:
Jan 13 21:12:24.086035 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:12:24.086042 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:12:24.086051 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:12:24.086060 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jan 13 21:12:24.086067 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jan 13 21:12:24.086074 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:12:24.086082 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:12:24.086089 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jan 13 21:12:24.086096 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 21:12:24.086103 kernel: NX (Execute Disable) protection: active
Jan 13 21:12:24.086113 kernel: APIC: Static calls initialized
Jan 13 21:12:24.086121 kernel: SMBIOS 3.0.0 present.
Jan 13 21:12:24.086129 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jan 13 21:12:24.086137 kernel: Hypervisor detected: KVM
Jan 13 21:12:24.086145 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:12:24.086152 kernel: kvm-clock: using sched offset of 3485285888 cycles
Jan 13 21:12:24.086162 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:12:24.086170 kernel: tsc: Detected 1996.249 MHz processor
Jan 13 21:12:24.086178 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:12:24.086186 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:12:24.086194 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jan 13 21:12:24.086203 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:12:24.086211 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:12:24.087849 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jan 13 21:12:24.087869 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:12:24.087883 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jan 13 21:12:24.087890 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:12:24.087899 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:12:24.087906 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:12:24.087914 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jan 13 21:12:24.087922 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:12:24.087930 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:12:24.087938 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jan 13 21:12:24.087948 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jan 13 21:12:24.087956 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jan 13 21:12:24.087964 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jan 13 21:12:24.087972 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jan 13 21:12:24.087983 kernel: No NUMA configuration found
Jan 13 21:12:24.087992 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jan 13 21:12:24.088000 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jan 13 21:12:24.088010 kernel: Zone ranges:
Jan 13 21:12:24.088019 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:12:24.088027 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 21:12:24.088036 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 21:12:24.088044 kernel: Movable zone start for each node
Jan 13 21:12:24.088052 kernel: Early memory node ranges
Jan 13 21:12:24.088060 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:12:24.088068 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jan 13 21:12:24.088079 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jan 13 21:12:24.088087 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jan 13 21:12:24.088096 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:12:24.088104 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:12:24.088112 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 13 21:12:24.088120 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:12:24.088129 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:12:24.088137 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:12:24.088145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:12:24.088156 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:12:24.088164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:12:24.088173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:12:24.088181 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:12:24.088189 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:12:24.088198 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 21:12:24.088206 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:12:24.088214 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 13 21:12:24.088241 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:12:24.088253 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:12:24.088261 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 21:12:24.088270 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 21:12:24.088278 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 21:12:24.088286 kernel: pcpu-alloc: [0] 0 1
Jan 13 21:12:24.088295 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 13 21:12:24.088304 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 21:12:24.088313 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:12:24.088323 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:12:24.088332 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:12:24.088340 kernel: Fallback order for Node 0: 0
Jan 13 21:12:24.088349 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 13 21:12:24.088357 kernel: Policy zone: Normal
Jan 13 21:12:24.088365 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:12:24.088373 kernel: software IO TLB: area num 2.
Jan 13 21:12:24.088382 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 227308K reserved, 0K cma-reserved)
Jan 13 21:12:24.088390 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:12:24.088400 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 21:12:24.088409 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:12:24.088417 kernel: Dynamic Preempt: voluntary
Jan 13 21:12:24.088425 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:12:24.088435 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:12:24.088443 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:12:24.088452 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:12:24.088460 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:12:24.088468 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:12:24.088478 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:12:24.088487 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:12:24.088495 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 21:12:24.088503 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:12:24.088511 kernel: Console: colour VGA+ 80x25
Jan 13 21:12:24.088519 kernel: printk: console [tty0] enabled
Jan 13 21:12:24.088527 kernel: printk: console [ttyS0] enabled
Jan 13 21:12:24.088536 kernel: ACPI: Core revision 20230628
Jan 13 21:12:24.088544 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:12:24.088552 kernel: x2apic enabled
Jan 13 21:12:24.088562 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:12:24.088570 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:12:24.088579 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:12:24.088587 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jan 13 21:12:24.088595 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 13 21:12:24.088604 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 13 21:12:24.088612 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:12:24.088620 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:12:24.088629 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:12:24.088639 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:12:24.088647 kernel: Speculative Store Bypass: Vulnerable
Jan 13 21:12:24.088656 kernel: x86/fpu: x87 FPU will use FXSAVE
Jan 13 21:12:24.088664 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:12:24.088678 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:12:24.088688 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:12:24.088697 kernel: landlock: Up and running.
Jan 13 21:12:24.088705 kernel: SELinux: Initializing.
Jan 13 21:12:24.088714 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:12:24.088723 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:12:24.088731 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jan 13 21:12:24.088743 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:12:24.088752 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:12:24.088761 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:12:24.088770 kernel: Performance Events: AMD PMU driver.
Jan 13 21:12:24.088778 kernel: ... version: 0
Jan 13 21:12:24.088790 kernel: ... bit width: 48
Jan 13 21:12:24.088798 kernel: ... generic registers: 4
Jan 13 21:12:24.088807 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:12:24.088816 kernel: ... max period: 00007fffffffffff
Jan 13 21:12:24.088824 kernel: ... fixed-purpose events: 0
Jan 13 21:12:24.088833 kernel: ... event mask: 000000000000000f
Jan 13 21:12:24.088842 kernel: signal: max sigframe size: 1440
Jan 13 21:12:24.088851 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:12:24.088860 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:12:24.088870 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:12:24.088878 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:12:24.088887 kernel: .... node #0, CPUs: #1
Jan 13 21:12:24.088895 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:12:24.088904 kernel: smpboot: Max logical packages: 2
Jan 13 21:12:24.088913 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jan 13 21:12:24.088922 kernel: devtmpfs: initialized
Jan 13 21:12:24.088931 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:12:24.088940 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:12:24.088951 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:12:24.088959 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:12:24.088968 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:12:24.088977 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:12:24.088986 kernel: audit: type=2000 audit(1736802742.957:1): state=initialized audit_enabled=0 res=1
Jan 13 21:12:24.088994 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:12:24.089003 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:12:24.089012 kernel: cpuidle: using governor menu
Jan 13 21:12:24.089020 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:12:24.089031 kernel: dca service started, version 1.12.1
Jan 13 21:12:24.089039 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:12:24.089049 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:12:24.089057 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:12:24.089066 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:12:24.089075 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:12:24.089084 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:12:24.089093 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:12:24.089101 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:12:24.089111 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:12:24.089120 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:12:24.089129 kernel: ACPI: Interpreter enabled
Jan 13 21:12:24.089138 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:12:24.089146 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:12:24.089155 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:12:24.089164 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:12:24.089173 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 13 21:12:24.089181 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:12:24.091401 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:12:24.091511 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 21:12:24.091603 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 21:12:24.091617 kernel: acpiphp: Slot [3] registered
Jan 13 21:12:24.091626 kernel: acpiphp: Slot [4] registered
Jan 13 21:12:24.091635 kernel: acpiphp: Slot [5] registered
Jan 13 21:12:24.091643 kernel: acpiphp: Slot [6] registered
Jan 13 21:12:24.091652 kernel: acpiphp: Slot [7] registered
Jan 13 21:12:24.091664 kernel: acpiphp: Slot [8] registered
Jan 13 21:12:24.091672 kernel: acpiphp: Slot [9] registered
Jan 13 21:12:24.091681 kernel: acpiphp: Slot [10] registered
Jan 13 21:12:24.091690 kernel: acpiphp: Slot [11] registered
Jan 13 21:12:24.091698 kernel: acpiphp: Slot [12] registered
Jan 13 21:12:24.091707 kernel: acpiphp: Slot [13] registered
Jan 13 21:12:24.091716 kernel: acpiphp: Slot [14] registered
Jan 13 21:12:24.091724 kernel: acpiphp: Slot [15] registered
Jan 13 21:12:24.091733 kernel: acpiphp: Slot [16] registered
Jan 13 21:12:24.091743 kernel: acpiphp: Slot [17] registered
Jan 13 21:12:24.091752 kernel: acpiphp: Slot [18] registered
Jan 13 21:12:24.091760 kernel: acpiphp: Slot [19] registered
Jan 13 21:12:24.091769 kernel: acpiphp: Slot [20] registered
Jan 13 21:12:24.091777 kernel: acpiphp: Slot [21] registered
Jan 13 21:12:24.091786 kernel: acpiphp: Slot [22] registered
Jan 13 21:12:24.091794 kernel: acpiphp: Slot [23] registered
Jan 13 21:12:24.091803 kernel: acpiphp: Slot [24] registered
Jan 13 21:12:24.091811 kernel: acpiphp: Slot [25] registered
Jan 13 21:12:24.091820 kernel: acpiphp: Slot [26] registered
Jan 13 21:12:24.091830 kernel: acpiphp: Slot [27] registered
Jan 13 21:12:24.091839 kernel: acpiphp: Slot [28] registered
Jan 13 21:12:24.091847 kernel: acpiphp: Slot [29] registered
Jan 13 21:12:24.091856 kernel: acpiphp: Slot [30] registered
Jan 13 21:12:24.091865 kernel: acpiphp: Slot [31] registered
Jan 13 21:12:24.091873 kernel: PCI host bridge to bus 0000:00
Jan 13 21:12:24.091999 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:12:24.092085 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:12:24.092262 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:12:24.092377 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:12:24.092463 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jan 13 21:12:24.092544 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:12:24.092657 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 21:12:24.092757 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 21:12:24.092868 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 13 21:12:24.092961 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jan 13 21:12:24.093053 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 13 21:12:24.093145 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 13 21:12:24.094271 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 13 21:12:24.094369 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 13 21:12:24.094467 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 21:12:24.094563 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 13 21:12:24.094653 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 13 21:12:24.097198 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 13 21:12:24.098619 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 13 21:12:24.099006 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 13 21:12:24.099206 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jan 13 21:12:24.100345 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jan 13 21:12:24.100448 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:12:24.100547 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:12:24.100642 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 13 21:12:24.100734 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 13 21:12:24.100822 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 13 21:12:24.100913 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 13 21:12:24.101010 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:12:24.101106 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:12:24.101197 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 13 21:12:24.102094 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 13 21:12:24.102249 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 13 21:12:24.102353 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 13 21:12:24.102448 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 13 21:12:24.102551 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:12:24.102654 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 13 21:12:24.102760 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 13 21:12:24.102852 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 13 21:12:24.102866 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:12:24.102876 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:12:24.102885 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:12:24.102895 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:12:24.102907 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 21:12:24.102916 kernel: iommu: Default domain type: Translated
Jan 13 21:12:24.102925 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:12:24.102934 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:12:24.102943 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:12:24.102952 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:12:24.102961 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 13 21:12:24.103054 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 13 21:12:24.103146 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 13 21:12:24.103266 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:12:24.103280 kernel: vgaarb: loaded
Jan 13 21:12:24.103289 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:12:24.103299 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:12:24.103308 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:12:24.103317 kernel: pnp: PnP ACPI init
Jan 13 21:12:24.103414 kernel: pnp 00:03: [dma 2]
Jan 13 21:12:24.103429 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 21:12:24.103438 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:12:24.103451 kernel: NET: Registered PF_INET protocol family
Jan 13 21:12:24.103460 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:12:24.103469 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:12:24.103478 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:12:24.103487 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:12:24.103496 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:12:24.103505 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:12:24.103514 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:12:24.103525 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:12:24.103534 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:12:24.103543 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:12:24.103631 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:12:24.103714 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:12:24.103793 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:12:24.103873 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:12:24.103951 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 13 21:12:24.104047 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 13 21:12:24.104148 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 21:12:24.104163 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:12:24.104172 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 21:12:24.104181 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 13 21:12:24.104190 kernel: Initialise system trusted keyrings
Jan 13 21:12:24.104199 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:12:24.104208 kernel: Key type asymmetric registered
Jan 13 21:12:24.104217 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:12:24.104285 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:12:24.104295 kernel: io scheduler mq-deadline registered
Jan 13 21:12:24.104304 kernel: io scheduler kyber registered
Jan 13 21:12:24.104313 kernel: io scheduler bfq registered
Jan 13 21:12:24.104322 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:12:24.104331 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 13 21:12:24.104340 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 21:12:24.104349 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 21:12:24.104358 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 21:12:24.104369 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:12:24.104377 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:12:24.104386 kernel: random: crng init done
Jan 13 21:12:24.104396 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:12:24.104404 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:12:24.104413 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:12:24.104517 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:12:24.104532 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:12:24.104617 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:12:24.104703 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:12:23 UTC (1736802743)
Jan 13 21:12:24.104786 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 13 21:12:24.104800 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:12:24.104810 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:12:24.104819 kernel: Segment Routing with IPv6
Jan 13 21:12:24.104828 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:12:24.104836 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:12:24.104845 kernel: Key type dns_resolver registered
Jan 13 21:12:24.104857 kernel: IPI shorthand broadcast: enabled
Jan 13 21:12:24.104866 kernel: sched_clock: Marking stable (1012007657, 177607230)->(1232291402, -42676515)
Jan 13 21:12:24.104875 kernel: registered taskstats version 1
Jan 13 21:12:24.104884 kernel: Loading compiled-in X.509 certificates
Jan 13 21:12:24.104893 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 21:12:24.104901 kernel: Key type .fscrypt registered
Jan 13 21:12:24.104910 kernel: Key type fscrypt-provisioning registered
Jan 13 21:12:24.104919 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:12:24.104930 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:12:24.104939 kernel: ima: No architecture policies found
Jan 13 21:12:24.104947 kernel: clk: Disabling unused clocks
Jan 13 21:12:24.104956 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 13 21:12:24.104965 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:12:24.104974 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 13 21:12:24.104982 kernel: Run /init as init process
Jan 13 21:12:24.104991 kernel: with arguments:
Jan 13 21:12:24.105000 kernel: /init
Jan 13 21:12:24.105008 kernel: with environment:
Jan 13 21:12:24.105019 kernel: HOME=/
Jan 13 21:12:24.105028 kernel: TERM=linux
Jan 13 21:12:24.105036 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:12:24.105048 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:12:24.105060 systemd[1]: Detected virtualization kvm.
Jan 13 21:12:24.105070 systemd[1]: Detected architecture x86-64.
Jan 13 21:12:24.105080 systemd[1]: Running in initrd.
Jan 13 21:12:24.105091 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:12:24.105100 systemd[1]: Hostname set to .
Jan 13 21:12:24.105110 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:12:24.105119 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:12:24.105128 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:12:24.105138 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:12:24.105149 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:12:24.105168 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:12:24.105180 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:12:24.105190 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:12:24.105201 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:12:24.105211 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:12:24.105243 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:12:24.105253 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:12:24.105262 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:12:24.105272 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:12:24.105282 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:12:24.105292 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:12:24.105301 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:12:24.105311 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:12:24.105321 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:12:24.105334 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:12:24.105344 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:12:24.105353 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:12:24.105363 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:12:24.105373 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:12:24.105383 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:12:24.105393 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:12:24.105403 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:12:24.105412 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:12:24.105424 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:12:24.105433 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:12:24.105443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:12:24.105453 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:12:24.105463 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:12:24.105473 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:12:24.105504 systemd-journald[185]: Collecting audit messages is disabled.
Jan 13 21:12:24.105530 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:12:24.105542 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:12:24.105553 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:12:24.105562 kernel: Bridge firewalling registered
Jan 13 21:12:24.105572 systemd-journald[185]: Journal started
Jan 13 21:12:24.105596 systemd-journald[185]: Runtime Journal (/run/log/journal/eb085149d8314ca895810f201b310fa6) is 8.0M, max 78.3M, 70.3M free.
Jan 13 21:12:24.055757 systemd-modules-load[186]: Inserted module 'overlay'
Jan 13 21:12:24.143344 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:12:24.101464 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jan 13 21:12:24.144591 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:12:24.146019 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:12:24.158502 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:12:24.161460 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:12:24.168756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:12:24.178391 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:12:24.179275 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:12:24.187570 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:12:24.194520 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:12:24.198195 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:12:24.209102 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:12:24.217599 dracut-cmdline[216]: dracut-dracut-053
Jan 13 21:12:24.219424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:12:24.222181 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 21:12:24.249569 systemd-resolved[228]: Positive Trust Anchors:
Jan 13 21:12:24.250288 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:12:24.250331 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:12:24.256563 systemd-resolved[228]: Defaulting to hostname 'linux'.
Jan 13 21:12:24.257757 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:12:24.258331 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:12:24.293279 kernel: SCSI subsystem initialized
Jan 13 21:12:24.303359 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:12:24.316296 kernel: iscsi: registered transport (tcp)
Jan 13 21:12:24.338401 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:12:24.338482 kernel: QLogic iSCSI HBA Driver
Jan 13 21:12:24.400238 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:12:24.404534 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:12:24.458828 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:12:24.458952 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:12:24.460836 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:12:24.507304 kernel: raid6: sse2x4 gen() 12938 MB/s
Jan 13 21:12:24.525294 kernel: raid6: sse2x2 gen() 14523 MB/s
Jan 13 21:12:24.543636 kernel: raid6: sse2x1 gen() 9905 MB/s
Jan 13 21:12:24.543706 kernel: raid6: using algorithm sse2x2 gen() 14523 MB/s
Jan 13 21:12:24.562707 kernel: raid6: .... xor() 9385 MB/s, rmw enabled
Jan 13 21:12:24.562788 kernel: raid6: using ssse3x2 recovery algorithm
Jan 13 21:12:24.584569 kernel: xor: measuring software checksum speed
Jan 13 21:12:24.584643 kernel: prefetch64-sse : 18494 MB/sec
Jan 13 21:12:24.587992 kernel: generic_sse : 15296 MB/sec
Jan 13 21:12:24.588052 kernel: xor: using function: prefetch64-sse (18494 MB/sec)
Jan 13 21:12:24.779344 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:12:24.796973 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:12:24.803575 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:12:24.816270 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Jan 13 21:12:24.820596 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:12:24.831567 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:12:24.847677 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Jan 13 21:12:24.893840 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:12:24.899546 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:12:24.957326 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:12:24.965386 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:12:24.981168 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:12:24.985997 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:12:24.986604 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:12:24.988552 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:12:24.997432 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:12:25.016245 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:12:25.043260 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jan 13 21:12:25.073439 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jan 13 21:12:25.073580 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:12:25.073595 kernel: GPT:17805311 != 20971519
Jan 13 21:12:25.073607 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:12:25.073619 kernel: GPT:17805311 != 20971519
Jan 13 21:12:25.073629 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:12:25.073640 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:12:25.053540 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:12:25.053676 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:12:25.056145 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:12:25.057058 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:12:25.057207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:12:25.059297 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:12:25.073569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:12:25.084364 kernel: libata version 3.00 loaded.
Jan 13 21:12:25.092347 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 13 21:12:25.101762 kernel: scsi host0: ata_piix
Jan 13 21:12:25.101888 kernel: scsi host1: ata_piix
Jan 13 21:12:25.102013 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jan 13 21:12:25.102026 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jan 13 21:12:25.119258 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (453)
Jan 13 21:12:25.125240 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (456)
Jan 13 21:12:25.132148 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:12:25.161753 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:12:25.171992 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:12:25.177653 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:12:25.182191 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:12:25.182781 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:12:25.197414 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:12:25.200154 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:12:25.224127 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:12:25.224194 disk-uuid[505]: Primary Header is updated.
Jan 13 21:12:25.224194 disk-uuid[505]: Secondary Entries is updated.
Jan 13 21:12:25.224194 disk-uuid[505]: Secondary Header is updated.
Jan 13 21:12:25.228964 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:12:26.249333 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:12:26.249436 disk-uuid[514]: The operation has completed successfully.
Jan 13 21:12:26.331022 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:12:26.331144 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:12:26.357374 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:12:26.361264 sh[525]: Success
Jan 13 21:12:26.378299 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jan 13 21:12:26.446921 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:12:26.456662 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:12:26.458960 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:12:26.498282 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 13 21:12:26.498401 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:12:26.498432 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:12:26.504514 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:12:26.508287 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:12:26.529799 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:12:26.532200 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:12:26.540572 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:12:26.546521 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:12:26.567908 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:12:26.567990 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:12:26.572073 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:12:26.584288 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:12:26.606305 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:12:26.614542 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:12:26.620034 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:12:26.628504 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:12:26.704287 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:12:26.713417 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:12:26.733865 systemd-networkd[708]: lo: Link UP
Jan 13 21:12:26.733876 systemd-networkd[708]: lo: Gained carrier
Jan 13 21:12:26.735035 systemd-networkd[708]: Enumeration completed
Jan 13 21:12:26.735271 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:12:26.735671 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:12:26.735675 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:12:26.735930 systemd[1]: Reached target network.target - Network.
Jan 13 21:12:26.737022 systemd-networkd[708]: eth0: Link UP
Jan 13 21:12:26.737026 systemd-networkd[708]: eth0: Gained carrier
Jan 13 21:12:26.737034 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:12:26.749287 systemd-networkd[708]: eth0: DHCPv4 address 172.24.4.27/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 21:12:26.779479 ignition[634]: Ignition 2.20.0
Jan 13 21:12:26.779491 ignition[634]: Stage: fetch-offline
Jan 13 21:12:26.781512 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:12:26.779530 ignition[634]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:12:26.779541 ignition[634]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:12:26.779635 ignition[634]: parsed url from cmdline: ""
Jan 13 21:12:26.779640 ignition[634]: no config URL provided
Jan 13 21:12:26.779645 ignition[634]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:12:26.779654 ignition[634]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:12:26.779660 ignition[634]: failed to fetch config: resource requires networking
Jan 13 21:12:26.780086 ignition[634]: Ignition finished successfully
Jan 13 21:12:26.792393 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:12:26.805442 ignition[717]: Ignition 2.20.0
Jan 13 21:12:26.805456 ignition[717]: Stage: fetch
Jan 13 21:12:26.805655 ignition[717]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:12:26.805667 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:12:26.805772 ignition[717]: parsed url from cmdline: ""
Jan 13 21:12:26.805776 ignition[717]: no config URL provided
Jan 13 21:12:26.805782 ignition[717]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:12:26.805794 ignition[717]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:12:26.805881 ignition[717]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 13 21:12:26.805966 ignition[717]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 13 21:12:26.805999 ignition[717]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 13 21:12:26.983969 ignition[717]: GET result: OK
Jan 13 21:12:26.984139 ignition[717]: parsing config with SHA512: 02a5c017fa3ae36aec179ea8183b453ce9651c02f58faba5dce92a55fb2f27d8ec2ec742d6ed5ca1bc2cfa90d54b528b9254d2ec12cc016096f64b4e29d11b41
Jan 13 21:12:26.995399 unknown[717]: fetched base config from "system"
Jan 13 21:12:26.995427 unknown[717]: fetched base config from "system"
Jan 13 21:12:26.996344 ignition[717]: fetch: fetch complete
Jan 13 21:12:26.995441 unknown[717]: fetched user config from "openstack"
Jan 13 21:12:26.996356 ignition[717]: fetch: fetch passed
Jan 13 21:12:27.000760 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:12:26.996448 ignition[717]: Ignition finished successfully
Jan 13 21:12:27.009679 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:12:27.044360 ignition[724]: Ignition 2.20.0
Jan 13 21:12:27.044385 ignition[724]: Stage: kargs
Jan 13 21:12:27.044782 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:12:27.044809 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:12:27.049652 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:12:27.047155 ignition[724]: kargs: kargs passed
Jan 13 21:12:27.047311 ignition[724]: Ignition finished successfully
Jan 13 21:12:27.059585 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:12:27.098007 ignition[730]: Ignition 2.20.0
Jan 13 21:12:27.099875 ignition[730]: Stage: disks
Jan 13 21:12:27.101544 ignition[730]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:12:27.102425 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:12:27.105337 ignition[730]: disks: disks passed
Jan 13 21:12:27.105515 ignition[730]: Ignition finished successfully
Jan 13 21:12:27.107705 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:12:27.111040 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:12:27.112546 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:12:27.115699 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:12:27.118772 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:12:27.121384 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:12:27.129646 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:12:27.171284 systemd-fsck[738]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 21:12:27.180886 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:12:27.190502 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:12:27.352270 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 13 21:12:27.352623 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:12:27.353606 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:12:27.360455 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:12:27.363897 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:12:27.366938 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:12:27.373817 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (746)
Jan 13 21:12:27.373886 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:12:27.376441 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:12:27.379052 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:12:27.379168 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 13 21:12:27.380008 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:12:27.393355 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:12:27.380037 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:12:27.392064 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:12:27.396370 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:12:27.418994 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:12:27.521928 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:12:27.530166 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:12:27.535380 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:12:27.539976 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:12:27.658386 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:12:27.666376 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:12:27.673382 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:12:27.680453 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:12:27.684722 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:12:27.712253 ignition[863]: INFO : Ignition 2.20.0
Jan 13 21:12:27.712253 ignition[863]: INFO : Stage: mount
Jan 13 21:12:27.712253 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:12:27.712253 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:12:27.716322 ignition[863]: INFO : mount: mount passed
Jan 13 21:12:27.716322 ignition[863]: INFO : Ignition finished successfully
Jan 13 21:12:27.719672 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:12:27.730370 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:12:28.180901 systemd-networkd[708]: eth0: Gained IPv6LL
Jan 13 21:12:34.605492 coreos-metadata[748]: Jan 13 21:12:34.605 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 21:12:34.647126 coreos-metadata[748]: Jan 13 21:12:34.647 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 21:12:34.662709 coreos-metadata[748]: Jan 13 21:12:34.662 INFO Fetch successful
Jan 13 21:12:34.664254 coreos-metadata[748]: Jan 13 21:12:34.663 INFO wrote hostname ci-4152-2-0-a-cb16eea878.novalocal to /sysroot/etc/hostname
Jan 13 21:12:34.666818 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 13 21:12:34.667056 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 13 21:12:34.679483 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:12:34.704550 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:12:34.735332 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (881)
Jan 13 21:12:34.737273 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 21:12:34.742529 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:12:34.746944 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:12:34.758306 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:12:34.763976 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:12:34.814164 ignition[899]: INFO : Ignition 2.20.0
Jan 13 21:12:34.817394 ignition[899]: INFO : Stage: files
Jan 13 21:12:34.817394 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:12:34.817394 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:12:34.823149 ignition[899]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:12:34.824696 ignition[899]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:12:34.824696 ignition[899]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:12:34.830695 ignition[899]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:12:34.831805 ignition[899]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:12:34.831805 ignition[899]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:12:34.831195 unknown[899]: wrote ssh authorized keys file for user: core
Jan 13 21:12:34.834755 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:12:34.834755 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:12:34.922416 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:12:35.227277 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:12:35.227277 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:12:35.227277 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 13 21:12:35.757435 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:12:36.192767 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 13 21:12:36.670108 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 21:12:38.460096 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:12:38.460096 ignition[899]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 21:12:38.485285 ignition[899]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:12:38.487911 ignition[899]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:12:38.487911 ignition[899]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 21:12:38.487911 ignition[899]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:12:38.487911 ignition[899]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:12:38.487911 ignition[899]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:12:38.487911 ignition[899]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:12:38.487911 ignition[899]: INFO : files: files passed
Jan 13 21:12:38.487911 ignition[899]: INFO : Ignition finished successfully
Jan 13 21:12:38.487995 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:12:38.501467 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:12:38.506403 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:12:38.521989 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:12:38.523780 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:12:38.555138 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:12:38.555138 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:12:38.562639 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:12:38.563587 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:12:38.566608 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:12:38.582656 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:12:38.644592 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:12:38.644880 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:12:38.648713 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:12:38.651046 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:12:38.654096 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:12:38.661525 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:12:38.698182 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:12:38.708522 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:12:38.745734 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:12:38.747607 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:12:38.750871 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:12:38.753704 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:12:38.753987 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:12:38.757173 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:12:38.759122 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:12:38.762029 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:12:38.764723 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:12:38.767362 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:12:38.770359 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:12:38.773347 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:12:38.776493 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:12:38.779020 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:12:38.780950 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:12:38.782658 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:12:38.782996 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:12:38.785118 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:12:38.787115 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:12:38.789035 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:12:38.789321 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:12:38.791168 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:12:38.791495 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:12:38.794117 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:12:38.794468 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:12:38.796403 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:12:38.796669 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:12:38.805660 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:12:38.806806 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:12:38.807473 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:12:38.812434 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:12:38.812942 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:12:38.813075 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:12:38.813726 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:12:38.813844 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:12:38.826688 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:12:38.827268 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:12:38.833564 ignition[951]: INFO : Ignition 2.20.0
Jan 13 21:12:38.833564 ignition[951]: INFO : Stage: umount
Jan 13 21:12:38.836013 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:12:38.836013 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 21:12:38.836013 ignition[951]: INFO : umount: umount passed
Jan 13 21:12:38.836013 ignition[951]: INFO : Ignition finished successfully
Jan 13 21:12:38.836926 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:12:38.837363 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:12:38.838320 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:12:38.838375 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:12:38.840893 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:12:38.840946 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:12:38.841715 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 21:12:38.841765 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 21:12:38.844348 systemd[1]: Stopped target network.target - Network.
Jan 13 21:12:38.844922 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:12:38.844980 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:12:38.845602 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:12:38.846104 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:12:38.847591 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:12:38.850446 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:12:38.851563 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:12:38.852942 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:12:38.852994 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:12:38.854271 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:12:38.854327 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:12:38.855743 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:12:38.855827 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:12:38.857707 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:12:38.857772 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:12:38.859339 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:12:38.860905 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:12:38.863751 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:12:38.864494 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:12:38.864598 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:12:38.866279 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:12:38.866382 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:12:38.866424 systemd-networkd[708]: eth0: DHCPv6 lease lost
Jan 13 21:12:38.868499 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:12:38.868625 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:12:38.870602 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:12:38.870655 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:12:38.878374 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:12:38.884124 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:12:38.884300 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:12:38.886382 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:12:38.888100 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:12:38.888215 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:12:38.895666 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:12:38.896473 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:12:38.898758 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:12:38.898875 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:12:38.901531 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:12:38.901590 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:12:38.902871 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:12:38.902907 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:12:38.904164 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:12:38.904208 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:12:38.905875 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:12:38.905920 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:12:38.906981 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:12:38.907027 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:12:38.915432 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:12:38.916653 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:12:38.916738 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:12:38.917350 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:12:38.917395 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:12:38.917919 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:12:38.917962 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:12:38.920384 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:12:38.920429 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:12:38.923573 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:12:38.923674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:12:38.925239 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:12:38.925360 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:12:38.926507 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:12:38.933410 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:12:38.941604 systemd[1]: Switching root.
Jan 13 21:12:38.970438 systemd-journald[185]: Journal stopped
Jan 13 21:12:40.638953 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:12:40.639011 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:12:40.639028 kernel: SELinux: policy capability open_perms=1
Jan 13 21:12:40.639040 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:12:40.639051 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:12:40.639062 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:12:40.639077 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:12:40.639088 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:12:40.639099 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:12:40.639111 kernel: audit: type=1403 audit(1736802759.577:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:12:40.639123 systemd[1]: Successfully loaded SELinux policy in 84.599ms.
Jan 13 21:12:40.639140 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.379ms.
Jan 13 21:12:40.639153 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:12:40.639169 systemd[1]: Detected virtualization kvm.
Jan 13 21:12:40.639184 systemd[1]: Detected architecture x86-64.
Jan 13 21:12:40.639196 systemd[1]: Detected first boot.
Jan 13 21:12:40.639208 systemd[1]: Hostname set to .
Jan 13 21:12:40.639236 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:12:40.640287 zram_generator::config[996]: No configuration found.
Jan 13 21:12:40.640305 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:12:40.640317 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:12:40.640330 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:12:40.640342 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:12:40.640359 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:12:40.640372 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:12:40.640384 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:12:40.640396 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:12:40.640408 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:12:40.640420 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:12:40.640432 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:12:40.640444 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:12:40.640459 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:12:40.640471 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:12:40.640483 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:12:40.640497 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:12:40.640509 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:12:40.640521 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:12:40.640533 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:12:40.640545 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:12:40.640557 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:12:40.640572 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:12:40.640585 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:12:40.640597 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:12:40.640609 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:12:40.640624 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:12:40.640636 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:12:40.640650 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:12:40.640661 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:12:40.640676 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:12:40.640688 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:12:40.640701 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:12:40.640713 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:12:40.640725 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:12:40.640737 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:12:40.640749 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:12:40.640762 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:12:40.640777 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:12:40.640789 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:12:40.640801 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:12:40.640813 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:12:40.640825 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:12:40.640838 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:12:40.640850 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:12:40.640862 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:12:40.640876 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:12:40.640888 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:12:40.640900 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:12:40.640912 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:12:40.640924 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:12:40.640936 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:12:40.640948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:12:40.640961 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:12:40.640975 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:12:40.640988 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:12:40.641000 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:12:40.641011 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:12:40.641025 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:12:40.641037 kernel: fuse: init (API version 7.39)
Jan 13 21:12:40.641049 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:12:40.641061 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:12:40.641073 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:12:40.641087 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:12:40.641100 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:12:40.641112 systemd[1]: Stopped verity-setup.service.
Jan 13 21:12:40.641124 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:12:40.641154 systemd-journald[1093]: Collecting audit messages is disabled.
Jan 13 21:12:40.641181 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:12:40.641193 kernel: ACPI: bus type drm_connector registered
Jan 13 21:12:40.641204 kernel: loop: module loaded
Jan 13 21:12:40.641217 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:12:40.643371 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:12:40.643390 systemd-journald[1093]: Journal started
Jan 13 21:12:40.643427 systemd-journald[1093]: Runtime Journal (/run/log/journal/eb085149d8314ca895810f201b310fa6) is 8.0M, max 78.3M, 70.3M free.
Jan 13 21:12:40.268492 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:12:40.290910 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 21:12:40.291316 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:12:40.647261 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:12:40.647596 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:12:40.648249 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:12:40.648853 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:12:40.649583 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:12:40.650374 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:12:40.651171 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:12:40.651404 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:12:40.652183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:12:40.652435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:12:40.653195 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:12:40.653641 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:12:40.654394 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:12:40.654560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:12:40.655401 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:12:40.655581 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:12:40.656352 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:12:40.656514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:12:40.657359 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:12:40.658087 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:12:40.658862 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:12:40.670539 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:12:40.676970 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:12:40.681455 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:12:40.682174 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:12:40.682299 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:12:40.684133 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:12:40.688390 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:12:40.690343 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:12:40.693019 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:12:40.702850 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:12:40.708455 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:12:40.709140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:12:40.713372 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:12:40.714093 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:12:40.717410 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:12:40.722467 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:12:40.733393 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:12:40.735683 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:12:40.737514 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:12:40.738156 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:12:40.739335 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:12:40.758462 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:12:40.765865 systemd-journald[1093]: Time spent on flushing to /var/log/journal/eb085149d8314ca895810f201b310fa6 is 58.867ms for 949 entries.
Jan 13 21:12:40.765865 systemd-journald[1093]: System Journal (/var/log/journal/eb085149d8314ca895810f201b310fa6) is 8.0M, max 584.8M, 576.8M free.
Jan 13 21:12:40.853429 systemd-journald[1093]: Received client request to flush runtime journal.
Jan 13 21:12:40.853471 kernel: loop0: detected capacity change from 0 to 210664
Jan 13 21:12:40.786499 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:12:40.788462 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:12:40.797385 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:12:40.798380 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:12:40.801144 udevadm[1136]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 21:12:40.855167 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:12:40.911641 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:12:40.928299 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:12:40.931432 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:12:40.932844 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:12:40.933505 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:12:40.975933 kernel: loop1: detected capacity change from 0 to 8
Jan 13 21:12:41.005693 kernel: loop2: detected capacity change from 0 to 140992
Jan 13 21:12:41.006861 systemd-tmpfiles[1148]: ACLs are not supported, ignoring.
Jan 13 21:12:41.007197 systemd-tmpfiles[1148]: ACLs are not supported, ignoring.
Jan 13 21:12:41.014895 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:12:41.112353 kernel: loop3: detected capacity change from 0 to 138184
Jan 13 21:12:41.224480 kernel: loop4: detected capacity change from 0 to 210664
Jan 13 21:12:41.292308 kernel: loop5: detected capacity change from 0 to 8
Jan 13 21:12:41.302983 kernel: loop6: detected capacity change from 0 to 140992
Jan 13 21:12:41.354511 kernel: loop7: detected capacity change from 0 to 138184
Jan 13 21:12:41.410496 (sd-merge)[1156]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 13 21:12:41.410990 (sd-merge)[1156]: Merged extensions into '/usr'.
Jan 13 21:12:41.415569 systemd[1]: Reloading requested from client PID 1129 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:12:41.415586 systemd[1]: Reloading...
Jan 13 21:12:41.535263 zram_generator::config[1179]: No configuration found.
Jan 13 21:12:41.768301 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:12:41.781060 ldconfig[1124]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:12:41.827294 systemd[1]: Reloading finished in 411 ms.
Jan 13 21:12:41.856627 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:12:41.858998 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:12:41.861181 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:12:41.877530 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:12:41.882524 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:12:41.895628 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:12:41.905442 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:12:41.905479 systemd[1]: Reloading...
Jan 13 21:12:41.926627 systemd-udevd[1241]: Using default interface naming scheme 'v255'.
Jan 13 21:12:41.946945 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:12:41.947325 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:12:41.948155 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:12:41.949529 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jan 13 21:12:41.949599 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jan 13 21:12:41.953864 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:12:41.953876 systemd-tmpfiles[1240]: Skipping /boot
Jan 13 21:12:41.968406 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:12:41.968418 systemd-tmpfiles[1240]: Skipping /boot
Jan 13 21:12:42.014243 zram_generator::config[1281]: No configuration found.
Jan 13 21:12:42.112699 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1282)
Jan 13 21:12:42.170053 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 21:12:42.243521 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:12:42.243606 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 13 21:12:42.244982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:12:42.301268 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 21:12:42.313308 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:12:42.319134 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 13 21:12:42.319184 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 13 21:12:42.329795 kernel: Console: switching to colour dummy device 80x25
Jan 13 21:12:42.329870 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 13 21:12:42.329908 kernel: [drm] features: -context_init
Jan 13 21:12:42.334324 kernel: [drm] number of scanouts: 1
Jan 13 21:12:42.338276 kernel: [drm] number of cap sets: 0
Jan 13 21:12:42.343327 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 13 21:12:42.352783 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 13 21:12:42.352884 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 21:12:42.348341 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:12:42.360889 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 13 21:12:42.363570 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 21:12:42.364089 systemd[1]: Reloading finished in 457 ms.
Jan 13 21:12:42.379887 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:12:42.390615 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:12:42.411777 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:12:42.426434 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:12:42.431539 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 21:12:42.437903 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:12:42.439841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:12:42.447603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:12:42.452640 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:12:42.458540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:12:42.462552 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:12:42.462950 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:12:42.468545 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:12:42.472787 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:12:42.486511 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:12:42.491488 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:12:42.498416 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:12:42.506402 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:12:42.516794 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:12:42.516888 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:12:42.517880 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:12:42.519153 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:12:42.519744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:12:42.520483 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:12:42.521215 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:12:42.521544 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:12:42.521662 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:12:42.521919 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:12:42.522051 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:12:42.526444 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:12:42.547416 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:12:42.550677 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:12:42.550784 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:12:42.554206 augenrules[1398]: No rules
Jan 13 21:12:42.565828 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:12:42.566839 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 21:12:42.570380 lvm[1395]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:12:42.570035 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 21:12:42.576485 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:12:42.595609 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:12:42.599188 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:12:42.616830 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:12:42.617715 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:12:42.627838 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:12:42.632393 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:12:42.639482 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:12:42.644643 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:12:42.647008 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:12:42.655319 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:12:42.680410 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:12:42.693669 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:12:42.752118 systemd-networkd[1378]: lo: Link UP
Jan 13 21:12:42.752128 systemd-networkd[1378]: lo: Gained carrier
Jan 13 21:12:42.755632 systemd-networkd[1378]: Enumeration completed
Jan 13 21:12:42.755744 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:12:42.756058 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:12:42.756062 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:12:42.757302 systemd-networkd[1378]: eth0: Link UP
Jan 13 21:12:42.757306 systemd-networkd[1378]: eth0: Gained carrier
Jan 13 21:12:42.757320 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:12:42.770456 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:12:42.772514 systemd-networkd[1378]: eth0: DHCPv4 address 172.24.4.27/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 21:12:42.774392 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 21:12:42.776597 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:12:42.780117 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection.
Jan 13 21:12:42.783542 systemd-resolved[1379]: Positive Trust Anchors:
Jan 13 21:12:42.783562 systemd-resolved[1379]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:12:42.783606 systemd-resolved[1379]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:12:42.789133 systemd-resolved[1379]: Using system hostname 'ci-4152-2-0-a-cb16eea878.novalocal'.
Jan 13 21:12:42.790669 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:12:42.792520 systemd[1]: Reached target network.target - Network.
Jan 13 21:12:42.794608 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:12:42.796800 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:12:42.799107 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:12:42.801435 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:12:42.803816 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:12:42.806492 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:12:42.808576 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:12:42.810835 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:12:42.810863 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:12:42.812927 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:12:42.815654 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:12:42.819374 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:12:42.827055 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:12:42.830738 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:12:42.833327 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:12:42.833985 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:12:42.836543 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:12:42.836582 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:12:42.842364 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:12:42.845465 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 21:12:42.849260 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:12:42.854418 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:12:42.858675 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:12:42.861486 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:12:42.866079 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:12:42.881410 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 21:12:42.889849 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:12:42.901428 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:12:42.909245 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:12:42.915372 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:12:42.916004 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:12:42.920984 dbus-daemon[1431]: [system] SELinux support is enabled
Jan 13 21:12:42.924399 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:12:42.932282 jq[1432]: false
Jan 13 21:12:43.336143 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:12:43.352943 extend-filesystems[1434]: Found loop4
Jan 13 21:12:43.352943 extend-filesystems[1434]: Found loop5
Jan 13 21:12:43.352943 extend-filesystems[1434]: Found loop6
Jan 13 21:12:43.352943 extend-filesystems[1434]: Found loop7
Jan 13 21:12:43.352943 extend-filesystems[1434]: Found vda
Jan 13 21:12:43.352943 extend-filesystems[1434]: Found vda1
Jan 13 21:12:43.352943 extend-filesystems[1434]: Found vda2
Jan 13 21:12:43.352943 extend-filesystems[1434]: Found vda3
Jan 13 21:12:43.352943 extend-filesystems[1434]: Found usr
Jan 13 21:12:43.352943 extend-filesystems[1434]: Found vda4
Jan 13 21:12:43.352943 extend-filesystems[1434]: Found vda6
Jan 13 21:12:43.352943 extend-filesystems[1434]: Found vda7
Jan 13 21:12:43.352943 extend-filesystems[1434]: Found vda9
Jan 13 21:12:43.352943 extend-filesystems[1434]: Checking size of /dev/vda9
Jan 13 21:12:43.474092 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jan 13 21:12:43.474124 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jan 13 21:12:43.474145 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1263)
Jan 13 21:12:43.337556 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:12:43.474276 jq[1446]: true
Jan 13 21:12:43.474478 extend-filesystems[1434]: Resized partition /dev/vda9
Jan 13 21:12:43.342656 systemd-timesyncd[1380]: Contacted time server 212.227.232.161:123 (0.flatcar.pool.ntp.org).
Jan 13 21:12:43.483662 update_engine[1442]: I20250113 21:12:43.423532 1442 main.cc:92] Flatcar Update Engine starting
Jan 13 21:12:43.483662 update_engine[1442]: I20250113 21:12:43.430557 1442 update_check_scheduler.cc:74] Next update check in 7m9s
Jan 13 21:12:43.484080 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:12:43.484080 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 21:12:43.484080 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 21:12:43.484080 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jan 13 21:12:43.342748 systemd-timesyncd[1380]: Initial clock synchronization to Mon 2025-01-13 21:12:43.335877 UTC.
Jan 13 21:12:43.527865 extend-filesystems[1434]: Resized filesystem in /dev/vda9
Jan 13 21:12:43.344346 systemd-resolved[1379]: Clock change detected. Flushing caches.
Jan 13 21:12:43.354800 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:12:43.546289 tar[1453]: linux-amd64/helm
Jan 13 21:12:43.354993 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:12:43.547749 jq[1464]: true
Jan 13 21:12:43.356678 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:12:43.356852 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:12:43.384733 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:12:43.384790 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:12:43.391962 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:12:43.391987 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:12:43.416595 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:12:43.418837 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:12:43.454817 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:12:43.458960 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:12:43.479880 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 21:12:43.480162 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 21:12:43.504296 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:12:43.548656 systemd-logind[1440]: New seat seat0.
Jan 13 21:12:43.551161 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 21:12:43.551179 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 21:12:43.551369 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:12:43.664898 bash[1489]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:12:43.666914 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:12:43.687169 systemd[1]: Starting sshkeys.service...
Jan 13 21:12:43.715264 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 21:12:43.729356 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 21:12:43.827352 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:12:43.991835 containerd[1465]: time="2025-01-13T21:12:43.991664781Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 21:12:44.070921 containerd[1465]: time="2025-01-13T21:12:44.069144515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:12:44.073965 containerd[1465]: time="2025-01-13T21:12:44.073934057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:12:44.074542 containerd[1465]: time="2025-01-13T21:12:44.074523373Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:12:44.074670 containerd[1465]: time="2025-01-13T21:12:44.074652555Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 21:12:44.075491 containerd[1465]: time="2025-01-13T21:12:44.075458266Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 21:12:44.075984 containerd[1465]: time="2025-01-13T21:12:44.075965658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 21:12:44.076168 containerd[1465]: time="2025-01-13T21:12:44.076145595Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:12:44.076498 containerd[1465]: time="2025-01-13T21:12:44.076480774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:12:44.076966 containerd[1465]: time="2025-01-13T21:12:44.076942770Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:12:44.077606 containerd[1465]: time="2025-01-13T21:12:44.077588141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 21:12:44.077715 containerd[1465]: time="2025-01-13T21:12:44.077696043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:12:44.077833 containerd[1465]: time="2025-01-13T21:12:44.077816820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:12:44.078220 containerd[1465]: time="2025-01-13T21:12:44.078201311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:12:44.079510 containerd[1465]: time="2025-01-13T21:12:44.079488545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:12:44.080242 containerd[1465]: time="2025-01-13T21:12:44.080220698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:12:44.080335 containerd[1465]: time="2025-01-13T21:12:44.080318562Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 21:12:44.080550 containerd[1465]: time="2025-01-13T21:12:44.080530760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 21:12:44.080966 containerd[1465]: time="2025-01-13T21:12:44.080749229Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:12:44.089861 containerd[1465]: time="2025-01-13T21:12:44.089834128Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090125755Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090153938Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090175959Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090194253Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090351468Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090640560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090743654Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090762289Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090779170Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090796313Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090813345Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090827862Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090849082Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:12:44.092675 containerd[1465]: time="2025-01-13T21:12:44.090866685Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.090882955Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.090899636Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.090915446Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.090937257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.090952866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.090973345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.090990807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.091030512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.091047634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.091061009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.091076889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.091093560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.091113157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093041 containerd[1465]: time="2025-01-13T21:12:44.091126933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091147772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091162109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091179080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091204969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091222141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091235226Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091286141Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091307111Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091319614Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091333079Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091343960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091361342Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091373876Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 21:12:44.093345 containerd[1465]: time="2025-01-13T21:12:44.091387351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Jan 13 21:12:44.094908 containerd[1465]: time="2025-01-13T21:12:44.094843413Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:12:44.096654 containerd[1465]: time="2025-01-13T21:12:44.095897630Z" level=info msg="Connect containerd service" Jan 13 21:12:44.096654 containerd[1465]: time="2025-01-13T21:12:44.095944528Z" level=info msg="using legacy CRI server" Jan 13 21:12:44.096654 containerd[1465]: time="2025-01-13T21:12:44.095956120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:12:44.096654 containerd[1465]: time="2025-01-13T21:12:44.096104338Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:12:44.098034 containerd[1465]: time="2025-01-13T21:12:44.097970438Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:12:44.098935 
containerd[1465]: time="2025-01-13T21:12:44.098825050Z" level=info msg="Start subscribing containerd event" Jan 13 21:12:44.099033 containerd[1465]: time="2025-01-13T21:12:44.098955756Z" level=info msg="Start recovering state" Jan 13 21:12:44.099161 containerd[1465]: time="2025-01-13T21:12:44.099124352Z" level=info msg="Start event monitor" Jan 13 21:12:44.099215 containerd[1465]: time="2025-01-13T21:12:44.099160900Z" level=info msg="Start snapshots syncer" Jan 13 21:12:44.099215 containerd[1465]: time="2025-01-13T21:12:44.099181609Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:12:44.099215 containerd[1465]: time="2025-01-13T21:12:44.099196978Z" level=info msg="Start streaming server" Jan 13 21:12:44.100038 containerd[1465]: time="2025-01-13T21:12:44.099986549Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:12:44.100162 containerd[1465]: time="2025-01-13T21:12:44.100144906Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:12:44.101228 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:12:44.105550 containerd[1465]: time="2025-01-13T21:12:44.105512984Z" level=info msg="containerd successfully booted in 0.117530s" Jan 13 21:12:44.141354 sshd_keygen[1447]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:12:44.169571 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:12:44.184294 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:12:44.192282 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:12:44.192466 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:12:44.205616 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:12:44.218407 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:12:44.233547 tar[1453]: linux-amd64/LICENSE Jan 13 21:12:44.233695 tar[1453]: linux-amd64/README.md Jan 13 21:12:44.233711 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:12:44.237976 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:12:44.238748 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:12:44.251291 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:12:44.517290 systemd-networkd[1378]: eth0: Gained IPv6LL Jan 13 21:12:44.522704 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:12:44.528484 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:12:44.539687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:12:44.553704 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:12:44.606645 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:12:46.489275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
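The "failed to load cni during init" error above means exactly what it says: /etc/cni/net.d is empty at this point, so containerd's CRI plugin cannot set up pod networking yet. The condition clears once something installs a CNI config there (on a kubeadm cluster, the network add-on does this). A minimal bridge-network sketch of such a file, with a hypothetical name /etc/cni/net.d/10-containerd-net.conflist and an illustrative subnet, neither taken from this host:

    {
      "cniVersion": "0.4.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }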
Jan 13 21:12:46.491872 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:12:47.895299 kubelet[1546]: E0113 21:12:47.895231 1546 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:12:47.900871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:12:47.901289 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:12:47.902100 systemd[1]: kubelet.service: Consumed 2.295s CPU time. Jan 13 21:12:48.642797 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:12:48.656065 systemd[1]: Started sshd@0-172.24.4.27:22-172.24.4.1:42868.service - OpenSSH per-connection server daemon (172.24.4.1:42868). Jan 13 21:12:49.295877 login[1522]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 21:12:49.297924 login[1523]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 21:12:49.311788 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:12:49.322738 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:12:49.330645 systemd-logind[1440]: New session 2 of user core. Jan 13 21:12:49.336936 systemd-logind[1440]: New session 1 of user core. Jan 13 21:12:49.347497 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:12:49.357128 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:12:49.361510 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:12:49.487185 systemd[1564]: Queued start job for default target default.target. Jan 13 21:12:49.493898 systemd[1564]: Created slice app.slice - User Application Slice. Jan 13 21:12:49.493990 systemd[1564]: Reached target paths.target - Paths. Jan 13 21:12:49.494153 systemd[1564]: Reached target timers.target - Timers. Jan 13 21:12:49.495411 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:12:49.522156 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:12:49.522270 systemd[1564]: Reached target sockets.target - Sockets. Jan 13 21:12:49.522286 systemd[1564]: Reached target basic.target - Basic System. Jan 13 21:12:49.522319 systemd[1564]: Reached target default.target - Main User Target. Jan 13 21:12:49.522346 systemd[1564]: Startup finished in 153ms. Jan 13 21:12:49.522933 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:12:49.533385 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:12:49.534979 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:12:49.718713 sshd[1556]: Accepted publickey for core from 172.24.4.1 port 42868 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:12:49.720432 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:49.729151 systemd-logind[1440]: New session 3 of user core. Jan 13 21:12:49.736396 systemd[1]: Started session-3.scope - Session 3 of User core. 
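The kubelet failure above repeats several more times below and is expected on a node that has not been bootstrapped yet: the service starts, finds no /var/lib/kubelet/config.yaml, exits with status 1, and systemd schedules a restart. That file is normally written by kubeadm during init/join rather than by hand; a minimal sketch of its shape (values illustrative, paths matching later log lines):

    # /var/lib/kubelet/config.yaml (normally kubeadm-generated)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # matches SystemdCgroup:true in the containerd CRI config above
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt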
Jan 13 21:12:50.342317 coreos-metadata[1430]: Jan 13 21:12:50.342 WARN failed to locate config-drive, using the metadata service API instead Jan 13 21:12:50.345669 systemd[1]: Started sshd@1-172.24.4.27:22-172.24.4.1:42870.service - OpenSSH per-connection server daemon (172.24.4.1:42870). Jan 13 21:12:50.389958 coreos-metadata[1430]: Jan 13 21:12:50.389 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 13 21:12:50.683075 coreos-metadata[1430]: Jan 13 21:12:50.682 INFO Fetch successful Jan 13 21:12:50.683075 coreos-metadata[1430]: Jan 13 21:12:50.682 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 21:12:50.695996 coreos-metadata[1430]: Jan 13 21:12:50.695 INFO Fetch successful Jan 13 21:12:50.695996 coreos-metadata[1430]: Jan 13 21:12:50.695 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 13 21:12:50.709273 coreos-metadata[1430]: Jan 13 21:12:50.709 INFO Fetch successful Jan 13 21:12:50.709273 coreos-metadata[1430]: Jan 13 21:12:50.709 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 13 21:12:50.719916 coreos-metadata[1430]: Jan 13 21:12:50.719 INFO Fetch successful Jan 13 21:12:50.719916 coreos-metadata[1430]: Jan 13 21:12:50.719 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 13 21:12:50.727417 coreos-metadata[1430]: Jan 13 21:12:50.727 INFO Fetch successful Jan 13 21:12:50.727417 coreos-metadata[1430]: Jan 13 21:12:50.727 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 13 21:12:50.737940 coreos-metadata[1430]: Jan 13 21:12:50.737 INFO Fetch successful Jan 13 21:12:50.780201 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:12:50.781958 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:12:50.841451 coreos-metadata[1493]: Jan 13 21:12:50.841 WARN failed to locate config-drive, using the metadata service API instead Jan 13 21:12:50.884253 coreos-metadata[1493]: Jan 13 21:12:50.884 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 13 21:12:50.900331 coreos-metadata[1493]: Jan 13 21:12:50.900 INFO Fetch successful Jan 13 21:12:50.900331 coreos-metadata[1493]: Jan 13 21:12:50.900 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 21:12:50.916771 coreos-metadata[1493]: Jan 13 21:12:50.916 INFO Fetch successful Jan 13 21:12:50.922235 unknown[1493]: wrote ssh authorized keys file for user: core Jan 13 21:12:50.959129 update-ssh-keys[1607]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:12:50.961636 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:12:50.966628 systemd[1]: Finished sshkeys.service. Jan 13 21:12:50.968878 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:12:50.969226 systemd[1]: Startup finished in 1.240s (kernel) + 15.740s (initrd) + 11.074s (userspace) = 28.056s. Jan 13 21:12:52.032709 sshd[1598]: Accepted publickey for core from 172.24.4.1 port 42870 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:12:52.035253 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:52.046102 systemd-logind[1440]: New session 4 of user core. Jan 13 21:12:52.053701 systemd[1]: Started session-4.scope - Session 4 of User core. 
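The coreos-metadata sequence above falls back from the config drive to the EC2-compatible endpoint that OpenStack serves at 169.254.169.254, fetching one attribute per request. A rough Go equivalent of that loop, shown only to make the request pattern concrete (this is not the agent's actual code):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // The same attributes the agent fetches in the log above.
        for _, a := range []string{"hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"} {
            resp, err := http.Get("http://169.254.169.254/latest/meta-data/" + a)
            if err != nil {
                fmt.Println(a, "fetch failed:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s: %s\n", a, body)
        }
    }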
Jan 13 21:12:52.673735 sshd[1612]: Connection closed by 172.24.4.1 port 42870 Jan 13 21:12:52.675325 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:52.686497 systemd[1]: sshd@1-172.24.4.27:22-172.24.4.1:42870.service: Deactivated successfully. Jan 13 21:12:52.689697 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:12:52.692325 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:12:52.705114 systemd[1]: Started sshd@2-172.24.4.27:22-172.24.4.1:42872.service - OpenSSH per-connection server daemon (172.24.4.1:42872). Jan 13 21:12:52.708210 systemd-logind[1440]: Removed session 4. Jan 13 21:12:53.820525 sshd[1617]: Accepted publickey for core from 172.24.4.1 port 42872 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:12:53.823379 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:53.833332 systemd-logind[1440]: New session 5 of user core. Jan 13 21:12:53.845317 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:12:54.462850 sshd[1619]: Connection closed by 172.24.4.1 port 42872 Jan 13 21:12:54.465331 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:54.477140 systemd[1]: sshd@2-172.24.4.27:22-172.24.4.1:42872.service: Deactivated successfully. Jan 13 21:12:54.480903 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:12:54.482943 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:12:54.490560 systemd[1]: Started sshd@3-172.24.4.27:22-172.24.4.1:51766.service - OpenSSH per-connection server daemon (172.24.4.1:51766). Jan 13 21:12:54.493806 systemd-logind[1440]: Removed session 5. Jan 13 21:12:55.607059 sshd[1624]: Accepted publickey for core from 172.24.4.1 port 51766 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:12:55.609543 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:55.619837 systemd-logind[1440]: New session 6 of user core. Jan 13 21:12:55.631305 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:12:56.250639 sshd[1626]: Connection closed by 172.24.4.1 port 51766 Jan 13 21:12:56.251367 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:56.261150 systemd[1]: sshd@3-172.24.4.27:22-172.24.4.1:51766.service: Deactivated successfully. Jan 13 21:12:56.263986 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:12:56.267385 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:12:56.277658 systemd[1]: Started sshd@4-172.24.4.27:22-172.24.4.1:51778.service - OpenSSH per-connection server daemon (172.24.4.1:51778). Jan 13 21:12:56.281217 systemd-logind[1440]: Removed session 6. Jan 13 21:12:57.434484 sshd[1631]: Accepted publickey for core from 172.24.4.1 port 51778 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:12:57.437113 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:57.447374 systemd-logind[1440]: New session 7 of user core. Jan 13 21:12:57.455309 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 13 21:12:57.931307 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:12:57.932682 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:12:57.934448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:12:57.947869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:12:57.964296 sudo[1634]: pam_unix(sudo:session): session closed for user root Jan 13 21:12:58.174376 sshd[1633]: Connection closed by 172.24.4.1 port 51778 Jan 13 21:12:58.173983 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:58.190523 systemd[1]: sshd@4-172.24.4.27:22-172.24.4.1:51778.service: Deactivated successfully. Jan 13 21:12:58.200608 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:12:58.202301 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:12:58.215548 systemd[1]: Started sshd@5-172.24.4.27:22-172.24.4.1:51790.service - OpenSSH per-connection server daemon (172.24.4.1:51790). Jan 13 21:12:58.217212 systemd-logind[1440]: Removed session 7. Jan 13 21:12:58.235654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:12:58.239944 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:12:58.306607 kubelet[1648]: E0113 21:12:58.306497 1648 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:12:58.312624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:12:58.312895 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:12:59.377204 sshd[1644]: Accepted publickey for core from 172.24.4.1 port 51790 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:12:59.379810 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:59.390967 systemd-logind[1440]: New session 8 of user core. Jan 13 21:12:59.399317 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:12:59.851632 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:12:59.852346 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:12:59.860316 sudo[1660]: pam_unix(sudo:session): session closed for user root Jan 13 21:12:59.871962 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 21:12:59.873245 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:12:59.901761 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 21:12:59.961098 augenrules[1682]: No rules Jan 13 21:12:59.962366 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:12:59.962739 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
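The "No rules" line from augenrules above is the intended outcome of the two sudo commands before it: the stock rule files under /etc/audit/rules.d/ were deleted, so the rebuilt ruleset is empty. For reference, files in that directory use plain auditctl syntax and are concatenated by augenrules before loading; a hypothetical fragment, not present on this host:

    # /etc/audit/rules.d/99-example.rules (illustrative)
    -D                                        # flush any loaded rules first
    -b 8192                                   # size the kernel audit backlog
    -w /etc/kubernetes/ -p wa -k kubeconfig   # log writes and attribute changes under /etc/kubernetes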
Jan 13 21:12:59.965521 sudo[1659]: pam_unix(sudo:session): session closed for user root Jan 13 21:13:00.116703 sshd[1658]: Connection closed by 172.24.4.1 port 51790 Jan 13 21:13:00.117647 sshd-session[1644]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:00.130818 systemd[1]: sshd@5-172.24.4.27:22-172.24.4.1:51790.service: Deactivated successfully. Jan 13 21:13:00.134532 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:13:00.140161 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:13:00.146538 systemd[1]: Started sshd@6-172.24.4.27:22-172.24.4.1:51792.service - OpenSSH per-connection server daemon (172.24.4.1:51792). Jan 13 21:13:00.149391 systemd-logind[1440]: Removed session 8. Jan 13 21:13:01.264306 sshd[1690]: Accepted publickey for core from 172.24.4.1 port 51792 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:13:01.266966 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:01.276141 systemd-logind[1440]: New session 9 of user core. Jan 13 21:13:01.288271 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:13:01.737500 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:13:01.738905 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:13:02.399382 (dockerd)[1711]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:13:02.400428 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:13:02.942384 dockerd[1711]: time="2025-01-13T21:13:02.942299111Z" level=info msg="Starting up" Jan 13 21:13:03.098826 dockerd[1711]: time="2025-01-13T21:13:03.098607788Z" level=info msg="Loading containers: start." Jan 13 21:13:03.292137 kernel: Initializing XFRM netlink socket Jan 13 21:13:03.393737 systemd-networkd[1378]: docker0: Link UP Jan 13 21:13:03.429899 dockerd[1711]: time="2025-01-13T21:13:03.429808826Z" level=info msg="Loading containers: done." Jan 13 21:13:03.460904 dockerd[1711]: time="2025-01-13T21:13:03.460813074Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:13:03.461148 dockerd[1711]: time="2025-01-13T21:13:03.461036443Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 21:13:03.461293 dockerd[1711]: time="2025-01-13T21:13:03.461239734Z" level=info msg="Daemon has completed initialization" Jan 13 21:13:03.535623 dockerd[1711]: time="2025-01-13T21:13:03.534521765Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:13:03.535413 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:13:05.819712 containerd[1465]: time="2025-01-13T21:13:05.819658498Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 21:13:06.651740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215925482.mount: Deactivated successfully. Jan 13 21:13:08.575036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:13:08.587626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
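dockerd above selects the overlay2 storage driver and warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; that warning is informational and only affects image-build performance, not running containers. Settings of this kind can be pinned in /etc/docker/daemon.json if the defaults ever need overriding; a minimal sketch with illustrative values:

    {
      "storage-driver": "overlay2",
      "log-driver": "json-file",
      "log-opts": { "max-size": "10m", "max-file": "3" }
    }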
Jan 13 21:13:08.685192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:13:08.687933 (kubelet)[1968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:13:08.737129 kubelet[1968]: E0113 21:13:08.736965 1968 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:13:08.739406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:13:08.739559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:13:08.963305 containerd[1465]: time="2025-01-13T21:13:08.963113337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:08.965958 containerd[1465]: time="2025-01-13T21:13:08.965855159Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675650" Jan 13 21:13:08.970054 containerd[1465]: time="2025-01-13T21:13:08.968259338Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:08.982702 containerd[1465]: time="2025-01-13T21:13:08.982626013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:08.985756 containerd[1465]: time="2025-01-13T21:13:08.985678157Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 3.165950549s" Jan 13 21:13:08.985874 containerd[1465]: time="2025-01-13T21:13:08.985756855Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 13 21:13:09.037286 containerd[1465]: time="2025-01-13T21:13:09.037189356Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 21:13:11.448594 containerd[1465]: time="2025-01-13T21:13:11.448331242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:11.449916 containerd[1465]: time="2025-01-13T21:13:11.449676185Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606417" Jan 13 21:13:11.451496 containerd[1465]: time="2025-01-13T21:13:11.451438490Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:11.454900 containerd[1465]: time="2025-01-13T21:13:11.454856781Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:11.456255 containerd[1465]: time="2025-01-13T21:13:11.456126082Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.418867755s" Jan 13 21:13:11.456255 containerd[1465]: time="2025-01-13T21:13:11.456159564Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 13 21:13:11.481451 containerd[1465]: time="2025-01-13T21:13:11.481414871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 21:13:13.168559 containerd[1465]: time="2025-01-13T21:13:13.168503645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:13.170051 containerd[1465]: time="2025-01-13T21:13:13.169989001Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783043" Jan 13 21:13:13.172145 containerd[1465]: time="2025-01-13T21:13:13.172071397Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:13.175719 containerd[1465]: time="2025-01-13T21:13:13.175675737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:13.178180 containerd[1465]: time="2025-01-13T21:13:13.177355076Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.695752554s" Jan 13 21:13:13.178180 containerd[1465]: time="2025-01-13T21:13:13.177389521Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 13 21:13:13.200447 containerd[1465]: time="2025-01-13T21:13:13.200211654Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 21:13:14.809123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184490131.mount: Deactivated successfully. 
Jan 13 21:13:15.308674 containerd[1465]: time="2025-01-13T21:13:15.308506650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:15.310110 containerd[1465]: time="2025-01-13T21:13:15.310027763Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478" Jan 13 21:13:15.312114 containerd[1465]: time="2025-01-13T21:13:15.312056979Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:15.314844 containerd[1465]: time="2025-01-13T21:13:15.314762774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:15.316130 containerd[1465]: time="2025-01-13T21:13:15.315479257Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.115232087s" Jan 13 21:13:15.316130 containerd[1465]: time="2025-01-13T21:13:15.315512440Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 21:13:15.342228 containerd[1465]: time="2025-01-13T21:13:15.342187619Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:13:16.011971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3679001130.mount: Deactivated successfully. 
Jan 13 21:13:17.496863 containerd[1465]: time="2025-01-13T21:13:17.496806492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:17.497936 containerd[1465]: time="2025-01-13T21:13:17.497890069Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 13 21:13:17.499994 containerd[1465]: time="2025-01-13T21:13:17.499939441Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:17.503362 containerd[1465]: time="2025-01-13T21:13:17.503302154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:17.508020 containerd[1465]: time="2025-01-13T21:13:17.506877248Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.164648361s" Jan 13 21:13:17.508020 containerd[1465]: time="2025-01-13T21:13:17.506936359Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:13:17.532453 containerd[1465]: time="2025-01-13T21:13:17.532405895Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:13:18.084475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1577628377.mount: Deactivated successfully. 
Jan 13 21:13:18.094638 containerd[1465]: time="2025-01-13T21:13:18.094376745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:18.096752 containerd[1465]: time="2025-01-13T21:13:18.096631092Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 13 21:13:18.098537 containerd[1465]: time="2025-01-13T21:13:18.098375485Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:18.105236 containerd[1465]: time="2025-01-13T21:13:18.105068334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:18.108266 containerd[1465]: time="2025-01-13T21:13:18.107355091Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 574.899353ms" Jan 13 21:13:18.108266 containerd[1465]: time="2025-01-13T21:13:18.107425484Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:13:18.158361 containerd[1465]: time="2025-01-13T21:13:18.158302194Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 21:13:18.791319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 21:13:18.803190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:13:18.817035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2547821421.mount: Deactivated successfully. Jan 13 21:13:19.079148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:13:19.093246 (kubelet)[2078]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:13:19.252649 kubelet[2078]: E0113 21:13:19.252515 2078 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:13:19.258656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:13:19.259051 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
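Each kubelet start in this log notes "Referenced but unset environment variable ... KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS": the unit's ExecStart line references those variables, and they expand to empty strings until something defines them (kubeadm writes KUBELET_KUBEADM_ARGS once the node is bootstrapped). Extra flags are conventionally supplied through a drop-in; a hypothetical example reusing this host's address:

    # /etc/systemd/system/kubelet.service.d/20-extra-args.conf (illustrative)
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--node-ip=172.24.4.27"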
Jan 13 21:13:22.014083 containerd[1465]: time="2025-01-13T21:13:22.013889704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:22.092892 containerd[1465]: time="2025-01-13T21:13:22.092734506Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 13 21:13:22.097053 containerd[1465]: time="2025-01-13T21:13:22.096896473Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:22.116076 containerd[1465]: time="2025-01-13T21:13:22.114504720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:22.117585 containerd[1465]: time="2025-01-13T21:13:22.117518222Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.959162147s" Jan 13 21:13:22.117800 containerd[1465]: time="2025-01-13T21:13:22.117755579Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 13 21:13:26.318288 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:13:26.333532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:13:26.365295 systemd[1]: Reloading requested from client PID 2191 ('systemctl') (unit session-9.scope)... Jan 13 21:13:26.365437 systemd[1]: Reloading... Jan 13 21:13:26.472065 zram_generator::config[2230]: No configuration found. Jan 13 21:13:26.620262 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:13:26.701349 systemd[1]: Reloading finished in 335 ms. Jan 13 21:13:26.763319 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:13:26.763504 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:13:26.763993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:13:26.767387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:13:26.871164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:13:26.877309 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:13:26.933361 kubelet[2298]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:13:27.092198 kubelet[2298]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 21:13:27.092198 kubelet[2298]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:13:27.092198 kubelet[2298]: I0113 21:13:27.088673 2298 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:13:28.163685 kubelet[2298]: I0113 21:13:28.163652 2298 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:13:28.164073 kubelet[2298]: I0113 21:13:28.164061 2298 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:13:28.164341 kubelet[2298]: I0113 21:13:28.164327 2298 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:13:28.537448 kubelet[2298]: E0113 21:13:28.537288 2298 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:28.538097 kubelet[2298]: I0113 21:13:28.538058 2298 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:13:28.568038 kubelet[2298]: I0113 21:13:28.567953 2298 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:13:28.569059 kubelet[2298]: I0113 21:13:28.568723 2298 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:13:28.569429 kubelet[2298]: I0113 21:13:28.568794 2298 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-a-cb16eea878.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:13:28.570063 kubelet[2298]: I0113 21:13:28.569726 2298 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 13 21:13:28.570063 kubelet[2298]: I0113 21:13:28.569765 2298 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:13:28.570277 kubelet[2298]: I0113 21:13:28.570250 2298 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:13:28.573069 kubelet[2298]: I0113 21:13:28.572565 2298 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:13:28.573232 kubelet[2298]: I0113 21:13:28.573206 2298 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:13:28.573402 kubelet[2298]: I0113 21:13:28.573381 2298 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:13:28.573848 kubelet[2298]: I0113 21:13:28.573783 2298 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:13:28.581785 kubelet[2298]: W0113 21:13:28.581435 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-a-cb16eea878.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:28.581785 kubelet[2298]: E0113 21:13:28.581584 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-a-cb16eea878.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:28.584695 kubelet[2298]: W0113 21:13:28.584250 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:28.584695 kubelet[2298]: E0113 21:13:28.584338 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:28.585145 kubelet[2298]: I0113 21:13:28.585101 2298 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 21:13:28.590074 kubelet[2298]: I0113 21:13:28.588645 2298 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:13:28.590074 kubelet[2298]: W0113 21:13:28.588744 2298 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 21:13:28.590074 kubelet[2298]: I0113 21:13:28.589940 2298 server.go:1264] "Started kubelet" Jan 13 21:13:28.598553 kubelet[2298]: I0113 21:13:28.597197 2298 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:13:28.599487 kubelet[2298]: I0113 21:13:28.599413 2298 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:13:28.601631 kubelet[2298]: I0113 21:13:28.601509 2298 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:13:28.602965 kubelet[2298]: I0113 21:13:28.602216 2298 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:13:28.605086 kubelet[2298]: E0113 21:13:28.604587 2298 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.27:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.27:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-0-a-cb16eea878.novalocal.181a5cf3d2e6cf2d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-a-cb16eea878.novalocal,UID:ci-4152-2-0-a-cb16eea878.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-a-cb16eea878.novalocal,},FirstTimestamp:2025-01-13 21:13:28.589897517 +0000 UTC m=+1.708414764,LastTimestamp:2025-01-13 21:13:28.589897517 +0000 UTC m=+1.708414764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-a-cb16eea878.novalocal,}" Jan 13 21:13:28.606585 kubelet[2298]: I0113 21:13:28.606526 2298 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:13:28.616151 kubelet[2298]: E0113 21:13:28.616093 2298 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:13:28.616726 kubelet[2298]: E0113 21:13:28.616250 2298 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-0-a-cb16eea878.novalocal\" not found" Jan 13 21:13:28.616726 kubelet[2298]: I0113 21:13:28.616315 2298 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:13:28.616726 kubelet[2298]: I0113 21:13:28.616491 2298 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:13:28.616726 kubelet[2298]: I0113 21:13:28.616578 2298 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:13:28.618665 kubelet[2298]: E0113 21:13:28.618328 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-a-cb16eea878.novalocal?timeout=10s\": dial tcp 172.24.4.27:6443: connect: connection refused" interval="200ms" Jan 13 21:13:28.618665 kubelet[2298]: W0113 21:13:28.618511 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:28.618665 kubelet[2298]: E0113 21:13:28.618605 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:28.619908 kubelet[2298]: I0113 21:13:28.619392 2298 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:13:28.626224 kubelet[2298]: I0113 21:13:28.622959 2298 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:13:28.626224 kubelet[2298]: I0113 21:13:28.622995 2298 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:13:28.632246 kubelet[2298]: I0113 21:13:28.632109 2298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:13:28.647735 kubelet[2298]: I0113 21:13:28.647690 2298 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:13:28.647941 kubelet[2298]: I0113 21:13:28.647919 2298 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:13:28.648550 kubelet[2298]: I0113 21:13:28.648511 2298 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:13:28.648663 kubelet[2298]: E0113 21:13:28.648563 2298 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:13:28.649298 kubelet[2298]: W0113 21:13:28.649246 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:28.649298 kubelet[2298]: E0113 21:13:28.649294 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:28.664178 kubelet[2298]: I0113 21:13:28.664087 2298 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:13:28.664178 kubelet[2298]: I0113 21:13:28.664171 2298 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:13:28.664283 kubelet[2298]: I0113 21:13:28.664197 2298 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:13:28.669863 kubelet[2298]: I0113 21:13:28.669818 2298 policy_none.go:49] "None policy: Start" Jan 13 21:13:28.670713 kubelet[2298]: I0113 21:13:28.670651 2298 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:13:28.671048 kubelet[2298]: I0113 21:13:28.670787 2298 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:13:28.677468 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:13:28.686463 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:13:28.690218 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 21:13:28.701253 kubelet[2298]: I0113 21:13:28.700774 2298 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:13:28.701253 kubelet[2298]: I0113 21:13:28.700939 2298 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:13:28.701253 kubelet[2298]: I0113 21:13:28.701058 2298 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:13:28.702881 kubelet[2298]: E0113 21:13:28.702857 2298 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-0-a-cb16eea878.novalocal\" not found" Jan 13 21:13:28.718757 kubelet[2298]: I0113 21:13:28.718730 2298 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.719296 kubelet[2298]: E0113 21:13:28.719257 2298 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.27:6443/api/v1/nodes\": dial tcp 172.24.4.27:6443: connect: connection refused" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.749633 kubelet[2298]: I0113 21:13:28.749563 2298 topology_manager.go:215] "Topology Admit Handler" podUID="ad83d2cf6d4d79ea1eb45518e8ee03be" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.753100 kubelet[2298]: I0113 21:13:28.752665 2298 topology_manager.go:215] "Topology Admit Handler" podUID="e907ac7cc46e19b8e473823a538bc541" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.757148 kubelet[2298]: I0113 21:13:28.756595 2298 topology_manager.go:215] "Topology Admit Handler" podUID="5df713a98afb103f3c31dfa802fe9b37" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.772295 systemd[1]: Created slice kubepods-burstable-podad83d2cf6d4d79ea1eb45518e8ee03be.slice - libcontainer container kubepods-burstable-podad83d2cf6d4d79ea1eb45518e8ee03be.slice. Jan 13 21:13:28.796702 systemd[1]: Created slice kubepods-burstable-pod5df713a98afb103f3c31dfa802fe9b37.slice - libcontainer container kubepods-burstable-pod5df713a98afb103f3c31dfa802fe9b37.slice. Jan 13 21:13:28.813659 systemd[1]: Created slice kubepods-burstable-pode907ac7cc46e19b8e473823a538bc541.slice - libcontainer container kubepods-burstable-pode907ac7cc46e19b8e473823a538bc541.slice. 
Jan 13 21:13:28.818247 kubelet[2298]: I0113 21:13:28.817619 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df713a98afb103f3c31dfa802fe9b37-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"5df713a98afb103f3c31dfa802fe9b37\") " pod="kube-system/kube-scheduler-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.818247 kubelet[2298]: I0113 21:13:28.817696 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad83d2cf6d4d79ea1eb45518e8ee03be-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"ad83d2cf6d4d79ea1eb45518e8ee03be\") " pod="kube-system/kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.818247 kubelet[2298]: I0113 21:13:28.817747 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e907ac7cc46e19b8e473823a538bc541-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"e907ac7cc46e19b8e473823a538bc541\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.818247 kubelet[2298]: I0113 21:13:28.817794 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e907ac7cc46e19b8e473823a538bc541-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"e907ac7cc46e19b8e473823a538bc541\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.818247 kubelet[2298]: I0113 21:13:28.817841 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e907ac7cc46e19b8e473823a538bc541-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"e907ac7cc46e19b8e473823a538bc541\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.818645 kubelet[2298]: I0113 21:13:28.817885 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad83d2cf6d4d79ea1eb45518e8ee03be-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"ad83d2cf6d4d79ea1eb45518e8ee03be\") " pod="kube-system/kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.818645 kubelet[2298]: I0113 21:13:28.817928 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad83d2cf6d4d79ea1eb45518e8ee03be-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"ad83d2cf6d4d79ea1eb45518e8ee03be\") " pod="kube-system/kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.818645 kubelet[2298]: I0113 21:13:28.817971 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e907ac7cc46e19b8e473823a538bc541-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"e907ac7cc46e19b8e473823a538bc541\") " 
pod="kube-system/kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.818645 kubelet[2298]: I0113 21:13:28.818070 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e907ac7cc46e19b8e473823a538bc541-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"e907ac7cc46e19b8e473823a538bc541\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.819621 kubelet[2298]: E0113 21:13:28.819337 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-a-cb16eea878.novalocal?timeout=10s\": dial tcp 172.24.4.27:6443: connect: connection refused" interval="400ms" Jan 13 21:13:28.915109 update_engine[1442]: I20250113 21:13:28.914049 1442 update_attempter.cc:509] Updating boot flags... Jan 13 21:13:28.923480 kubelet[2298]: I0113 21:13:28.922687 2298 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.923480 kubelet[2298]: E0113 21:13:28.923333 2298 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.27:6443/api/v1/nodes\": dial tcp 172.24.4.27:6443: connect: connection refused" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:28.961103 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2335) Jan 13 21:13:29.088581 containerd[1465]: time="2025-01-13T21:13:29.088153130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal,Uid:ad83d2cf6d4d79ea1eb45518e8ee03be,Namespace:kube-system,Attempt:0,}" Jan 13 21:13:29.111877 containerd[1465]: time="2025-01-13T21:13:29.111746618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-a-cb16eea878.novalocal,Uid:5df713a98afb103f3c31dfa802fe9b37,Namespace:kube-system,Attempt:0,}" Jan 13 21:13:29.122158 containerd[1465]: time="2025-01-13T21:13:29.121984242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal,Uid:e907ac7cc46e19b8e473823a538bc541,Namespace:kube-system,Attempt:0,}" Jan 13 21:13:29.220613 kubelet[2298]: E0113 21:13:29.220477 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-a-cb16eea878.novalocal?timeout=10s\": dial tcp 172.24.4.27:6443: connect: connection refused" interval="800ms" Jan 13 21:13:29.326649 kubelet[2298]: I0113 21:13:29.326599 2298 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:29.327873 kubelet[2298]: E0113 21:13:29.327791 2298 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.27:6443/api/v1/nodes\": dial tcp 172.24.4.27:6443: connect: connection refused" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:29.546797 kubelet[2298]: W0113 21:13:29.546530 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:29.546797 kubelet[2298]: E0113 21:13:29.546665 2298 
reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:29.548961 kubelet[2298]: W0113 21:13:29.548818 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:29.548961 kubelet[2298]: E0113 21:13:29.548959 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:29.587162 kubelet[2298]: W0113 21:13:29.587055 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-a-cb16eea878.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:29.587162 kubelet[2298]: E0113 21:13:29.587162 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-a-cb16eea878.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:29.663765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999895125.mount: Deactivated successfully. Jan 13 21:13:29.676119 containerd[1465]: time="2025-01-13T21:13:29.675938676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:13:29.682868 containerd[1465]: time="2025-01-13T21:13:29.682744674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 13 21:13:29.684250 containerd[1465]: time="2025-01-13T21:13:29.684159827Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:13:29.686865 containerd[1465]: time="2025-01-13T21:13:29.686791309Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:13:29.691266 containerd[1465]: time="2025-01-13T21:13:29.691132107Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:13:29.693141 containerd[1465]: time="2025-01-13T21:13:29.693053793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:13:29.695075 containerd[1465]: time="2025-01-13T21:13:29.694964038Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:13:29.696656 containerd[1465]: time="2025-01-13T21:13:29.696560913Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:13:29.700044 containerd[1465]: time="2025-01-13T21:13:29.698870570Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 610.567949ms" Jan 13 21:13:29.713737 containerd[1465]: time="2025-01-13T21:13:29.713651584Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 591.343763ms" Jan 13 21:13:29.741938 containerd[1465]: time="2025-01-13T21:13:29.741879523Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 629.915016ms" Jan 13 21:13:29.843444 kubelet[2298]: W0113 21:13:29.841357 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:29.843444 kubelet[2298]: E0113 21:13:29.841396 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.27:6443: connect: connection refused Jan 13 21:13:29.884211 containerd[1465]: time="2025-01-13T21:13:29.884123926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:13:29.884339 containerd[1465]: time="2025-01-13T21:13:29.884242289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:13:29.884339 containerd[1465]: time="2025-01-13T21:13:29.884265543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:29.885660 containerd[1465]: time="2025-01-13T21:13:29.884433218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:29.887549 containerd[1465]: time="2025-01-13T21:13:29.887435378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:13:29.887685 containerd[1465]: time="2025-01-13T21:13:29.887519967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:13:29.887685 containerd[1465]: time="2025-01-13T21:13:29.887657416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:29.887900 containerd[1465]: time="2025-01-13T21:13:29.887835160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:29.889501 containerd[1465]: time="2025-01-13T21:13:29.888775520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:13:29.889501 containerd[1465]: time="2025-01-13T21:13:29.888837627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:13:29.889501 containerd[1465]: time="2025-01-13T21:13:29.888859097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:29.889501 containerd[1465]: time="2025-01-13T21:13:29.888944137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:29.920207 systemd[1]: Started cri-containerd-242feb58a4aca8c2844758d58b7042e73696fa842b6a906b33a8dfbdc0d2c765.scope - libcontainer container 242feb58a4aca8c2844758d58b7042e73696fa842b6a906b33a8dfbdc0d2c765. Jan 13 21:13:29.922350 systemd[1]: Started cri-containerd-344ff91c60928b8518e34ff9073615e2bfe095c595959ffff6f4ae2c9a57932b.scope - libcontainer container 344ff91c60928b8518e34ff9073615e2bfe095c595959ffff6f4ae2c9a57932b. Jan 13 21:13:29.932889 systemd[1]: Started cri-containerd-ee6016e7b32eaabb71c3c0c42ab3a204ab6b397c920b7e15ea24c2121d6798cc.scope - libcontainer container ee6016e7b32eaabb71c3c0c42ab3a204ab6b397c920b7e15ea24c2121d6798cc. Jan 13 21:13:29.981285 containerd[1465]: time="2025-01-13T21:13:29.981242373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-a-cb16eea878.novalocal,Uid:5df713a98afb103f3c31dfa802fe9b37,Namespace:kube-system,Attempt:0,} returns sandbox id \"242feb58a4aca8c2844758d58b7042e73696fa842b6a906b33a8dfbdc0d2c765\"" Jan 13 21:13:29.988326 containerd[1465]: time="2025-01-13T21:13:29.988291768Z" level=info msg="CreateContainer within sandbox \"242feb58a4aca8c2844758d58b7042e73696fa842b6a906b33a8dfbdc0d2c765\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:13:30.010045 containerd[1465]: time="2025-01-13T21:13:30.009920194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal,Uid:ad83d2cf6d4d79ea1eb45518e8ee03be,Namespace:kube-system,Attempt:0,} returns sandbox id \"344ff91c60928b8518e34ff9073615e2bfe095c595959ffff6f4ae2c9a57932b\"" Jan 13 21:13:30.013679 containerd[1465]: time="2025-01-13T21:13:30.013627730Z" level=info msg="CreateContainer within sandbox \"344ff91c60928b8518e34ff9073615e2bfe095c595959ffff6f4ae2c9a57932b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:13:30.019449 containerd[1465]: time="2025-01-13T21:13:30.019376896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal,Uid:e907ac7cc46e19b8e473823a538bc541,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee6016e7b32eaabb71c3c0c42ab3a204ab6b397c920b7e15ea24c2121d6798cc\"" Jan 13 21:13:30.021375 kubelet[2298]: E0113 21:13:30.021324 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.24.4.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-a-cb16eea878.novalocal?timeout=10s\": dial tcp 172.24.4.27:6443: connect: connection refused" interval="1.6s" Jan 13 21:13:30.022933 containerd[1465]: time="2025-01-13T21:13:30.022898682Z" level=info msg="CreateContainer within sandbox \"ee6016e7b32eaabb71c3c0c42ab3a204ab6b397c920b7e15ea24c2121d6798cc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:13:30.068929 containerd[1465]: time="2025-01-13T21:13:30.068855585Z" level=info msg="CreateContainer within sandbox \"242feb58a4aca8c2844758d58b7042e73696fa842b6a906b33a8dfbdc0d2c765\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"56dc262a688fc1c17a61ba039dc3a976239ded366a63b7f82c780e1b967d9bad\"" Jan 13 21:13:30.069499 containerd[1465]: time="2025-01-13T21:13:30.069457668Z" level=info msg="StartContainer for \"56dc262a688fc1c17a61ba039dc3a976239ded366a63b7f82c780e1b967d9bad\"" Jan 13 21:13:30.078759 containerd[1465]: time="2025-01-13T21:13:30.078190077Z" level=info msg="CreateContainer within sandbox \"344ff91c60928b8518e34ff9073615e2bfe095c595959ffff6f4ae2c9a57932b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e3d35c8f28a2f8b23b7556c6dce45b138ed249d3298558a78b94d9758fa203d4\"" Jan 13 21:13:30.081019 containerd[1465]: time="2025-01-13T21:13:30.080071006Z" level=info msg="StartContainer for \"e3d35c8f28a2f8b23b7556c6dce45b138ed249d3298558a78b94d9758fa203d4\"" Jan 13 21:13:30.100688 containerd[1465]: time="2025-01-13T21:13:30.100587405Z" level=info msg="CreateContainer within sandbox \"ee6016e7b32eaabb71c3c0c42ab3a204ab6b397c920b7e15ea24c2121d6798cc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7c4a389873e831aee99c28a75b1063fe5be5a78e26198465dac7a4119dc8ed4f\"" Jan 13 21:13:30.102111 containerd[1465]: time="2025-01-13T21:13:30.102091885Z" level=info msg="StartContainer for \"7c4a389873e831aee99c28a75b1063fe5be5a78e26198465dac7a4119dc8ed4f\"" Jan 13 21:13:30.116230 systemd[1]: Started cri-containerd-56dc262a688fc1c17a61ba039dc3a976239ded366a63b7f82c780e1b967d9bad.scope - libcontainer container 56dc262a688fc1c17a61ba039dc3a976239ded366a63b7f82c780e1b967d9bad. Jan 13 21:13:30.127364 systemd[1]: Started cri-containerd-e3d35c8f28a2f8b23b7556c6dce45b138ed249d3298558a78b94d9758fa203d4.scope - libcontainer container e3d35c8f28a2f8b23b7556c6dce45b138ed249d3298558a78b94d9758fa203d4. Jan 13 21:13:30.138448 kubelet[2298]: I0113 21:13:30.138413 2298 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:30.138815 kubelet[2298]: E0113 21:13:30.138785 2298 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.27:6443/api/v1/nodes\": dial tcp 172.24.4.27:6443: connect: connection refused" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:30.170143 systemd[1]: Started cri-containerd-7c4a389873e831aee99c28a75b1063fe5be5a78e26198465dac7a4119dc8ed4f.scope - libcontainer container 7c4a389873e831aee99c28a75b1063fe5be5a78e26198465dac7a4119dc8ed4f. 
Jan 13 21:13:30.216226 containerd[1465]: time="2025-01-13T21:13:30.216182355Z" level=info msg="StartContainer for \"56dc262a688fc1c17a61ba039dc3a976239ded366a63b7f82c780e1b967d9bad\" returns successfully" Jan 13 21:13:30.216962 containerd[1465]: time="2025-01-13T21:13:30.216377532Z" level=info msg="StartContainer for \"e3d35c8f28a2f8b23b7556c6dce45b138ed249d3298558a78b94d9758fa203d4\" returns successfully" Jan 13 21:13:30.253556 containerd[1465]: time="2025-01-13T21:13:30.253421216Z" level=info msg="StartContainer for \"7c4a389873e831aee99c28a75b1063fe5be5a78e26198465dac7a4119dc8ed4f\" returns successfully" Jan 13 21:13:31.740422 kubelet[2298]: I0113 21:13:31.740378 2298 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:32.253243 kubelet[2298]: E0113 21:13:32.253202 2298 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-0-a-cb16eea878.novalocal\" not found" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:32.431773 kubelet[2298]: I0113 21:13:32.431732 2298 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:32.582074 kubelet[2298]: I0113 21:13:32.582051 2298 apiserver.go:52] "Watching apiserver" Jan 13 21:13:32.616877 kubelet[2298]: I0113 21:13:32.616828 2298 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:13:32.886499 kubelet[2298]: E0113 21:13:32.886304 2298 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:35.124447 systemd[1]: Reloading requested from client PID 2594 ('systemctl') (unit session-9.scope)... Jan 13 21:13:35.125080 systemd[1]: Reloading... Jan 13 21:13:35.256032 zram_generator::config[2636]: No configuration found. Jan 13 21:13:35.395176 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:13:35.494517 systemd[1]: Reloading finished in 368 ms. Jan 13 21:13:35.531318 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:13:35.542847 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:13:35.543074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:13:35.543117 systemd[1]: kubelet.service: Consumed 1.771s CPU time, 114.0M memory peak, 0B memory swap peak. Jan 13 21:13:35.549329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:13:35.740362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:13:35.753078 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:13:35.821229 kubelet[2697]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:13:35.821608 kubelet[2697]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jan 13 21:13:35.821683 kubelet[2697]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:13:35.821871 kubelet[2697]: I0113 21:13:35.821838 2697 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:13:35.827085 kubelet[2697]: I0113 21:13:35.827055 2697 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:13:35.827236 kubelet[2697]: I0113 21:13:35.827225 2697 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:13:35.827589 kubelet[2697]: I0113 21:13:35.827573 2697 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:13:35.829249 kubelet[2697]: I0113 21:13:35.829232 2697 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:13:35.830591 kubelet[2697]: I0113 21:13:35.830552 2697 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:13:35.842835 kubelet[2697]: I0113 21:13:35.842807 2697 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:13:35.843246 kubelet[2697]: I0113 21:13:35.843206 2697 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:13:35.843778 kubelet[2697]: I0113 21:13:35.843311 2697 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-a-cb16eea878.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:13:35.844281 kubelet[2697]: I0113 21:13:35.844264 2697 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:13:35.844361 kubelet[2697]: I0113 21:13:35.844353 2697 container_manager_linux.go:301] "Creating device 
plugin manager" Jan 13 21:13:35.844470 kubelet[2697]: I0113 21:13:35.844459 2697 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:13:35.845824 kubelet[2697]: I0113 21:13:35.844653 2697 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:13:35.845824 kubelet[2697]: I0113 21:13:35.844672 2697 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:13:35.845824 kubelet[2697]: I0113 21:13:35.844696 2697 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:13:35.845824 kubelet[2697]: I0113 21:13:35.844729 2697 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:13:35.846524 kubelet[2697]: I0113 21:13:35.846363 2697 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 21:13:35.846659 kubelet[2697]: I0113 21:13:35.846647 2697 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:13:35.848019 kubelet[2697]: I0113 21:13:35.847243 2697 server.go:1264] "Started kubelet" Jan 13 21:13:35.849101 kubelet[2697]: I0113 21:13:35.849084 2697 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:13:35.858951 kubelet[2697]: I0113 21:13:35.858911 2697 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:13:35.860178 kubelet[2697]: I0113 21:13:35.860163 2697 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:13:35.861124 kubelet[2697]: I0113 21:13:35.861082 2697 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:13:35.861403 kubelet[2697]: I0113 21:13:35.861388 2697 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:13:35.863902 kubelet[2697]: I0113 21:13:35.863860 2697 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:13:35.865685 kubelet[2697]: I0113 21:13:35.865666 2697 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:13:35.865907 kubelet[2697]: I0113 21:13:35.865891 2697 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:13:35.867768 kubelet[2697]: I0113 21:13:35.867740 2697 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:13:35.868832 kubelet[2697]: I0113 21:13:35.868815 2697 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:13:35.868922 kubelet[2697]: I0113 21:13:35.868911 2697 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:13:35.869036 kubelet[2697]: I0113 21:13:35.869023 2697 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:13:35.869159 kubelet[2697]: E0113 21:13:35.869128 2697 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:13:35.891073 kubelet[2697]: I0113 21:13:35.889173 2697 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:13:35.891073 kubelet[2697]: I0113 21:13:35.889194 2697 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:13:35.891073 kubelet[2697]: I0113 21:13:35.889278 2697 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:13:35.957749 kubelet[2697]: I0113 21:13:35.957683 2697 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:13:35.957749 kubelet[2697]: I0113 21:13:35.957741 2697 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:13:35.957749 kubelet[2697]: I0113 21:13:35.957758 2697 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:13:35.957934 kubelet[2697]: I0113 21:13:35.957907 2697 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:13:35.957934 kubelet[2697]: I0113 21:13:35.957919 2697 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:13:35.958045 kubelet[2697]: I0113 21:13:35.957937 2697 policy_none.go:49] "None policy: Start" Jan 13 21:13:35.958480 kubelet[2697]: I0113 21:13:35.958457 2697 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:13:35.958480 kubelet[2697]: I0113 21:13:35.958479 2697 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:13:35.958615 kubelet[2697]: I0113 21:13:35.958595 2697 state_mem.go:75] "Updated machine memory state" Jan 13 21:13:35.963454 kubelet[2697]: I0113 21:13:35.963410 2697 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:13:35.963663 kubelet[2697]: I0113 21:13:35.963600 2697 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:13:35.963711 kubelet[2697]: I0113 21:13:35.963696 2697 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:13:35.970635 kubelet[2697]: I0113 21:13:35.970331 2697 topology_manager.go:215] "Topology Admit Handler" podUID="e907ac7cc46e19b8e473823a538bc541" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:35.970635 kubelet[2697]: I0113 21:13:35.970430 2697 topology_manager.go:215] "Topology Admit Handler" podUID="5df713a98afb103f3c31dfa802fe9b37" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:35.970635 kubelet[2697]: I0113 21:13:35.970485 2697 topology_manager.go:215] "Topology Admit Handler" podUID="ad83d2cf6d4d79ea1eb45518e8ee03be" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:35.975054 kubelet[2697]: I0113 21:13:35.972958 2697 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:35.979408 kubelet[2697]: W0113 21:13:35.979380 
2697 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:13:35.987820 kubelet[2697]: W0113 21:13:35.987794 2697 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:13:35.988631 kubelet[2697]: W0113 21:13:35.988562 2697 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:13:35.993044 kubelet[2697]: I0113 21:13:35.992744 2697 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:35.994596 kubelet[2697]: I0113 21:13:35.993235 2697 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:36.069717 kubelet[2697]: I0113 21:13:36.069660 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e907ac7cc46e19b8e473823a538bc541-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"e907ac7cc46e19b8e473823a538bc541\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:36.069717 kubelet[2697]: I0113 21:13:36.069706 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad83d2cf6d4d79ea1eb45518e8ee03be-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"ad83d2cf6d4d79ea1eb45518e8ee03be\") " pod="kube-system/kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:36.069886 kubelet[2697]: I0113 21:13:36.069730 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e907ac7cc46e19b8e473823a538bc541-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"e907ac7cc46e19b8e473823a538bc541\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:36.069886 kubelet[2697]: I0113 21:13:36.069750 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df713a98afb103f3c31dfa802fe9b37-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"5df713a98afb103f3c31dfa802fe9b37\") " pod="kube-system/kube-scheduler-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:36.069886 kubelet[2697]: I0113 21:13:36.069771 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad83d2cf6d4d79ea1eb45518e8ee03be-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"ad83d2cf6d4d79ea1eb45518e8ee03be\") " pod="kube-system/kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:36.069886 kubelet[2697]: I0113 21:13:36.069792 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad83d2cf6d4d79ea1eb45518e8ee03be-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: 
\"ad83d2cf6d4d79ea1eb45518e8ee03be\") " pod="kube-system/kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:36.069886 kubelet[2697]: I0113 21:13:36.069812 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e907ac7cc46e19b8e473823a538bc541-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"e907ac7cc46e19b8e473823a538bc541\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:36.070074 kubelet[2697]: I0113 21:13:36.069830 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e907ac7cc46e19b8e473823a538bc541-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"e907ac7cc46e19b8e473823a538bc541\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:36.070074 kubelet[2697]: I0113 21:13:36.069849 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e907ac7cc46e19b8e473823a538bc541-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal\" (UID: \"e907ac7cc46e19b8e473823a538bc541\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:36.083039 sudo[2730]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 21:13:36.083341 sudo[2730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 21:13:36.626531 sudo[2730]: pam_unix(sudo:session): session closed for user root Jan 13 21:13:36.845931 kubelet[2697]: I0113 21:13:36.845643 2697 apiserver.go:52] "Watching apiserver" Jan 13 21:13:36.866562 kubelet[2697]: I0113 21:13:36.866513 2697 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:13:36.941358 kubelet[2697]: W0113 21:13:36.940216 2697 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:13:36.941358 kubelet[2697]: E0113 21:13:36.940272 2697 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal" Jan 13 21:13:36.970727 kubelet[2697]: I0113 21:13:36.970678 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-0-a-cb16eea878.novalocal" podStartSLOduration=1.970649679 podStartE2EDuration="1.970649679s" podCreationTimestamp="2025-01-13 21:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:13:36.969602801 +0000 UTC m=+1.211139292" watchObservedRunningTime="2025-01-13 21:13:36.970649679 +0000 UTC m=+1.212186170" Jan 13 21:13:36.992801 kubelet[2697]: I0113 21:13:36.992653 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-0-a-cb16eea878.novalocal" podStartSLOduration=1.9926371139999999 podStartE2EDuration="1.992637114s" podCreationTimestamp="2025-01-13 21:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:13:36.980210479 +0000 UTC m=+1.221746970" watchObservedRunningTime="2025-01-13 21:13:36.992637114 +0000 UTC m=+1.234173605" Jan 13 21:13:38.919359 sudo[1693]: pam_unix(sudo:session): session closed for user root Jan 13 21:13:39.072192 sshd[1692]: Connection closed by 172.24.4.1 port 51792 Jan 13 21:13:39.073114 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:39.078746 systemd[1]: sshd@6-172.24.4.27:22-172.24.4.1:51792.service: Deactivated successfully. Jan 13 21:13:39.083230 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:13:39.083948 systemd[1]: session-9.scope: Consumed 7.605s CPU time, 189.8M memory peak, 0B memory swap peak. Jan 13 21:13:39.087108 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:13:39.089197 systemd-logind[1440]: Removed session 9. Jan 13 21:13:43.365649 kubelet[2697]: I0113 21:13:43.365404 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-0-a-cb16eea878.novalocal" podStartSLOduration=8.365374461 podStartE2EDuration="8.365374461s" podCreationTimestamp="2025-01-13 21:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:13:36.993451906 +0000 UTC m=+1.234988427" watchObservedRunningTime="2025-01-13 21:13:43.365374461 +0000 UTC m=+7.606911002" Jan 13 21:13:49.414422 kubelet[2697]: I0113 21:13:49.414392 2697 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:13:49.415252 containerd[1465]: time="2025-01-13T21:13:49.414921235Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:13:49.416415 kubelet[2697]: I0113 21:13:49.415859 2697 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:13:50.365962 kubelet[2697]: I0113 21:13:50.365879 2697 topology_manager.go:215] "Topology Admit Handler" podUID="65ebd03a-d04e-4618-89d4-7ae7ad20e3c9" podNamespace="kube-system" podName="kube-proxy-m9r7c" Jan 13 21:13:50.395683 systemd[1]: Created slice kubepods-besteffort-pod65ebd03a_d04e_4618_89d4_7ae7ad20e3c9.slice - libcontainer container kubepods-besteffort-pod65ebd03a_d04e_4618_89d4_7ae7ad20e3c9.slice. Jan 13 21:13:50.405679 kubelet[2697]: I0113 21:13:50.405634 2697 topology_manager.go:215] "Topology Admit Handler" podUID="b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" podNamespace="kube-system" podName="cilium-mjbkj" Jan 13 21:13:50.418748 systemd[1]: Created slice kubepods-burstable-podb2ff82b8_1d0c_44d8_85b4_20ccf2ba07ca.slice - libcontainer container kubepods-burstable-podb2ff82b8_1d0c_44d8_85b4_20ccf2ba07ca.slice. 
Jan 13 21:13:50.461844 kubelet[2697]: I0113 21:13:50.461734 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-hostproc\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.461844 kubelet[2697]: I0113 21:13:50.461774 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-xtables-lock\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.461844 kubelet[2697]: I0113 21:13:50.461796 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-hubble-tls\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.461844 kubelet[2697]: I0113 21:13:50.461819 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-host-proc-sys-kernel\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.461844 kubelet[2697]: I0113 21:13:50.461855 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65ebd03a-d04e-4618-89d4-7ae7ad20e3c9-xtables-lock\") pod \"kube-proxy-m9r7c\" (UID: \"65ebd03a-d04e-4618-89d4-7ae7ad20e3c9\") " pod="kube-system/kube-proxy-m9r7c" Jan 13 21:13:50.462504 kubelet[2697]: I0113 21:13:50.461889 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cilium-run\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.462504 kubelet[2697]: I0113 21:13:50.461913 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-etc-cni-netd\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.462504 kubelet[2697]: I0113 21:13:50.461931 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cilium-config-path\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.462504 kubelet[2697]: I0113 21:13:50.461984 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cilium-cgroup\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.462504 kubelet[2697]: I0113 21:13:50.462029 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cni-path\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.462504 kubelet[2697]: I0113 21:13:50.462048 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k79t2\" (UniqueName: \"kubernetes.io/projected/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-kube-api-access-k79t2\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.462660 kubelet[2697]: I0113 21:13:50.462067 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/65ebd03a-d04e-4618-89d4-7ae7ad20e3c9-kube-proxy\") pod \"kube-proxy-m9r7c\" (UID: \"65ebd03a-d04e-4618-89d4-7ae7ad20e3c9\") " pod="kube-system/kube-proxy-m9r7c" Jan 13 21:13:50.462660 kubelet[2697]: I0113 21:13:50.462084 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-bpf-maps\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.462660 kubelet[2697]: I0113 21:13:50.462102 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-lib-modules\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.462660 kubelet[2697]: I0113 21:13:50.462120 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-clustermesh-secrets\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.462660 kubelet[2697]: I0113 21:13:50.462137 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65ebd03a-d04e-4618-89d4-7ae7ad20e3c9-lib-modules\") pod \"kube-proxy-m9r7c\" (UID: \"65ebd03a-d04e-4618-89d4-7ae7ad20e3c9\") " pod="kube-system/kube-proxy-m9r7c" Jan 13 21:13:50.462660 kubelet[2697]: I0113 21:13:50.462154 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78zcx\" (UniqueName: \"kubernetes.io/projected/65ebd03a-d04e-4618-89d4-7ae7ad20e3c9-kube-api-access-78zcx\") pod \"kube-proxy-m9r7c\" (UID: \"65ebd03a-d04e-4618-89d4-7ae7ad20e3c9\") " pod="kube-system/kube-proxy-m9r7c" Jan 13 21:13:50.462845 kubelet[2697]: I0113 21:13:50.462172 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-host-proc-sys-net\") pod \"cilium-mjbkj\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " pod="kube-system/cilium-mjbkj" Jan 13 21:13:50.558252 kubelet[2697]: I0113 21:13:50.558176 2697 topology_manager.go:215] "Topology Admit Handler" podUID="b14235e6-6427-4dbc-b1c7-8092fd05e624" podNamespace="kube-system" podName="cilium-operator-599987898-xm7l4" Jan 13 21:13:50.566059 kubelet[2697]: I0113 21:13:50.563340 2697 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmqr6\" (UniqueName: \"kubernetes.io/projected/b14235e6-6427-4dbc-b1c7-8092fd05e624-kube-api-access-wmqr6\") pod \"cilium-operator-599987898-xm7l4\" (UID: \"b14235e6-6427-4dbc-b1c7-8092fd05e624\") " pod="kube-system/cilium-operator-599987898-xm7l4" Jan 13 21:13:50.566059 kubelet[2697]: I0113 21:13:50.563471 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b14235e6-6427-4dbc-b1c7-8092fd05e624-cilium-config-path\") pod \"cilium-operator-599987898-xm7l4\" (UID: \"b14235e6-6427-4dbc-b1c7-8092fd05e624\") " pod="kube-system/cilium-operator-599987898-xm7l4" Jan 13 21:13:50.628906 systemd[1]: Created slice kubepods-besteffort-podb14235e6_6427_4dbc_b1c7_8092fd05e624.slice - libcontainer container kubepods-besteffort-podb14235e6_6427_4dbc_b1c7_8092fd05e624.slice. Jan 13 21:13:50.709785 containerd[1465]: time="2025-01-13T21:13:50.709726524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m9r7c,Uid:65ebd03a-d04e-4618-89d4-7ae7ad20e3c9,Namespace:kube-system,Attempt:0,}" Jan 13 21:13:50.724722 containerd[1465]: time="2025-01-13T21:13:50.724677419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mjbkj,Uid:b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca,Namespace:kube-system,Attempt:0,}" Jan 13 21:13:50.759917 containerd[1465]: time="2025-01-13T21:13:50.759091138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:13:50.759917 containerd[1465]: time="2025-01-13T21:13:50.759202587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:13:50.759917 containerd[1465]: time="2025-01-13T21:13:50.759241560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:50.759917 containerd[1465]: time="2025-01-13T21:13:50.759357728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:50.768493 containerd[1465]: time="2025-01-13T21:13:50.768319220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:13:50.768800 containerd[1465]: time="2025-01-13T21:13:50.768425499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:13:50.768800 containerd[1465]: time="2025-01-13T21:13:50.768652285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:50.768991 containerd[1465]: time="2025-01-13T21:13:50.768916752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:50.795163 systemd[1]: Started cri-containerd-4cf30ad91c76f070cb6706213d7609bfb4bb43cf0911804fa3410e6162e2516f.scope - libcontainer container 4cf30ad91c76f070cb6706213d7609bfb4bb43cf0911804fa3410e6162e2516f. Jan 13 21:13:50.796283 systemd[1]: Started cri-containerd-55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8.scope - libcontainer container 55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8. 
Jan 13 21:13:50.829354 containerd[1465]: time="2025-01-13T21:13:50.828466424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mjbkj,Uid:b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\"" Jan 13 21:13:50.829354 containerd[1465]: time="2025-01-13T21:13:50.828643656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m9r7c,Uid:65ebd03a-d04e-4618-89d4-7ae7ad20e3c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cf30ad91c76f070cb6706213d7609bfb4bb43cf0911804fa3410e6162e2516f\"" Jan 13 21:13:50.833909 containerd[1465]: time="2025-01-13T21:13:50.833065035Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:13:50.836153 containerd[1465]: time="2025-01-13T21:13:50.835938570Z" level=info msg="CreateContainer within sandbox \"4cf30ad91c76f070cb6706213d7609bfb4bb43cf0911804fa3410e6162e2516f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:13:50.858639 containerd[1465]: time="2025-01-13T21:13:50.858574550Z" level=info msg="CreateContainer within sandbox \"4cf30ad91c76f070cb6706213d7609bfb4bb43cf0911804fa3410e6162e2516f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"392d2f888aa58735e55d561e6c5a8d050631fd80be5ca5cb1ebc6cee4f9864d7\"" Jan 13 21:13:50.859361 containerd[1465]: time="2025-01-13T21:13:50.859217327Z" level=info msg="StartContainer for \"392d2f888aa58735e55d561e6c5a8d050631fd80be5ca5cb1ebc6cee4f9864d7\"" Jan 13 21:13:50.889148 systemd[1]: Started cri-containerd-392d2f888aa58735e55d561e6c5a8d050631fd80be5ca5cb1ebc6cee4f9864d7.scope - libcontainer container 392d2f888aa58735e55d561e6c5a8d050631fd80be5ca5cb1ebc6cee4f9864d7. Jan 13 21:13:50.925807 containerd[1465]: time="2025-01-13T21:13:50.925736201Z" level=info msg="StartContainer for \"392d2f888aa58735e55d561e6c5a8d050631fd80be5ca5cb1ebc6cee4f9864d7\" returns successfully" Jan 13 21:13:50.950102 containerd[1465]: time="2025-01-13T21:13:50.949709492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xm7l4,Uid:b14235e6-6427-4dbc-b1c7-8092fd05e624,Namespace:kube-system,Attempt:0,}" Jan 13 21:13:50.983393 kubelet[2697]: I0113 21:13:50.983082 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m9r7c" podStartSLOduration=0.983065164 podStartE2EDuration="983.065164ms" podCreationTimestamp="2025-01-13 21:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:13:50.982946021 +0000 UTC m=+15.224482532" watchObservedRunningTime="2025-01-13 21:13:50.983065164 +0000 UTC m=+15.224601655" Jan 13 21:13:51.000026 containerd[1465]: time="2025-01-13T21:13:50.996364989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:13:51.000026 containerd[1465]: time="2025-01-13T21:13:50.996417858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:13:51.000026 containerd[1465]: time="2025-01-13T21:13:50.996432906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:51.000026 containerd[1465]: time="2025-01-13T21:13:50.996898210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:51.015164 systemd[1]: Started cri-containerd-feecce1447f4aaefb84a9146b5a9a7310df54fd4a28d979f5cb6c9a0025d0fd7.scope - libcontainer container feecce1447f4aaefb84a9146b5a9a7310df54fd4a28d979f5cb6c9a0025d0fd7. Jan 13 21:13:51.062992 containerd[1465]: time="2025-01-13T21:13:51.062931827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xm7l4,Uid:b14235e6-6427-4dbc-b1c7-8092fd05e624,Namespace:kube-system,Attempt:0,} returns sandbox id \"feecce1447f4aaefb84a9146b5a9a7310df54fd4a28d979f5cb6c9a0025d0fd7\"" Jan 13 21:14:06.628759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3997151296.mount: Deactivated successfully. Jan 13 21:14:09.173890 containerd[1465]: time="2025-01-13T21:14:09.173848340Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:09.175315 containerd[1465]: time="2025-01-13T21:14:09.175290145Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166733483" Jan 13 21:14:09.177183 containerd[1465]: time="2025-01-13T21:14:09.177136920Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:09.180141 containerd[1465]: time="2025-01-13T21:14:09.180063549Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 18.346935806s" Jan 13 21:14:09.180200 containerd[1465]: time="2025-01-13T21:14:09.180148539Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 21:14:09.183724 containerd[1465]: time="2025-01-13T21:14:09.183481822Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:14:09.185746 containerd[1465]: time="2025-01-13T21:14:09.185718849Z" level=info msg="CreateContainer within sandbox \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:14:09.210099 containerd[1465]: time="2025-01-13T21:14:09.209968200Z" level=info msg="CreateContainer within sandbox \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"77d5e8e22fc21696513fcb2a298e3ff57464c64fd5bb28a94b48cea418d6dc69\"" Jan 13 21:14:09.212820 containerd[1465]: time="2025-01-13T21:14:09.212759016Z" level=info msg="StartContainer for \"77d5e8e22fc21696513fcb2a298e3ff57464c64fd5bb28a94b48cea418d6dc69\"" Jan 13 21:14:09.215809 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3910106854.mount: Deactivated successfully. Jan 13 21:14:09.261161 systemd[1]: Started cri-containerd-77d5e8e22fc21696513fcb2a298e3ff57464c64fd5bb28a94b48cea418d6dc69.scope - libcontainer container 77d5e8e22fc21696513fcb2a298e3ff57464c64fd5bb28a94b48cea418d6dc69. Jan 13 21:14:09.289238 containerd[1465]: time="2025-01-13T21:14:09.289201239Z" level=info msg="StartContainer for \"77d5e8e22fc21696513fcb2a298e3ff57464c64fd5bb28a94b48cea418d6dc69\" returns successfully" Jan 13 21:14:09.300712 systemd[1]: cri-containerd-77d5e8e22fc21696513fcb2a298e3ff57464c64fd5bb28a94b48cea418d6dc69.scope: Deactivated successfully. Jan 13 21:14:10.203324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77d5e8e22fc21696513fcb2a298e3ff57464c64fd5bb28a94b48cea418d6dc69-rootfs.mount: Deactivated successfully. Jan 13 21:14:10.583336 containerd[1465]: time="2025-01-13T21:14:10.583225128Z" level=info msg="shim disconnected" id=77d5e8e22fc21696513fcb2a298e3ff57464c64fd5bb28a94b48cea418d6dc69 namespace=k8s.io Jan 13 21:14:10.584447 containerd[1465]: time="2025-01-13T21:14:10.583336307Z" level=warning msg="cleaning up after shim disconnected" id=77d5e8e22fc21696513fcb2a298e3ff57464c64fd5bb28a94b48cea418d6dc69 namespace=k8s.io Jan 13 21:14:10.584447 containerd[1465]: time="2025-01-13T21:14:10.583357787Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:14:11.046512 containerd[1465]: time="2025-01-13T21:14:11.046242692Z" level=info msg="CreateContainer within sandbox \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:14:11.083150 containerd[1465]: time="2025-01-13T21:14:11.082825528Z" level=info msg="CreateContainer within sandbox \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c\"" Jan 13 21:14:11.087186 containerd[1465]: time="2025-01-13T21:14:11.087083005Z" level=info msg="StartContainer for \"5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c\"" Jan 13 21:14:11.148164 systemd[1]: Started cri-containerd-5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c.scope - libcontainer container 5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c. Jan 13 21:14:11.174241 containerd[1465]: time="2025-01-13T21:14:11.174124658Z" level=info msg="StartContainer for \"5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c\" returns successfully" Jan 13 21:14:11.185913 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:14:11.187338 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:14:11.187422 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:14:11.195409 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:14:11.195792 systemd[1]: cri-containerd-5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c.scope: Deactivated successfully. Jan 13 21:14:11.201807 systemd[1]: run-containerd-runc-k8s.io-5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c-runc.iwiaqg.mount: Deactivated successfully. Jan 13 21:14:11.215592 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 21:14:11.225772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c-rootfs.mount: Deactivated successfully. Jan 13 21:14:11.237074 containerd[1465]: time="2025-01-13T21:14:11.236814980Z" level=info msg="shim disconnected" id=5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c namespace=k8s.io Jan 13 21:14:11.237074 containerd[1465]: time="2025-01-13T21:14:11.236894149Z" level=warning msg="cleaning up after shim disconnected" id=5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c namespace=k8s.io Jan 13 21:14:11.237074 containerd[1465]: time="2025-01-13T21:14:11.236909428Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:14:12.044321 containerd[1465]: time="2025-01-13T21:14:12.044189746Z" level=info msg="CreateContainer within sandbox \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:14:12.128386 containerd[1465]: time="2025-01-13T21:14:12.128154455Z" level=info msg="CreateContainer within sandbox \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e52bbe9a3e959f2db6f8635d870ec237602f0a8fe8953930b28db656bc70c5c3\"" Jan 13 21:14:12.130387 containerd[1465]: time="2025-01-13T21:14:12.130343401Z" level=info msg="StartContainer for \"e52bbe9a3e959f2db6f8635d870ec237602f0a8fe8953930b28db656bc70c5c3\"" Jan 13 21:14:12.176923 systemd[1]: Started cri-containerd-e52bbe9a3e959f2db6f8635d870ec237602f0a8fe8953930b28db656bc70c5c3.scope - libcontainer container e52bbe9a3e959f2db6f8635d870ec237602f0a8fe8953930b28db656bc70c5c3. Jan 13 21:14:12.237492 systemd[1]: cri-containerd-e52bbe9a3e959f2db6f8635d870ec237602f0a8fe8953930b28db656bc70c5c3.scope: Deactivated successfully. Jan 13 21:14:12.243458 containerd[1465]: time="2025-01-13T21:14:12.243429910Z" level=info msg="StartContainer for \"e52bbe9a3e959f2db6f8635d870ec237602f0a8fe8953930b28db656bc70c5c3\" returns successfully" Jan 13 21:14:12.285321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e52bbe9a3e959f2db6f8635d870ec237602f0a8fe8953930b28db656bc70c5c3-rootfs.mount: Deactivated successfully. 
Jan 13 21:14:12.376961 containerd[1465]: time="2025-01-13T21:14:12.376816688Z" level=info msg="shim disconnected" id=e52bbe9a3e959f2db6f8635d870ec237602f0a8fe8953930b28db656bc70c5c3 namespace=k8s.io Jan 13 21:14:12.376961 containerd[1465]: time="2025-01-13T21:14:12.376897680Z" level=warning msg="cleaning up after shim disconnected" id=e52bbe9a3e959f2db6f8635d870ec237602f0a8fe8953930b28db656bc70c5c3 namespace=k8s.io Jan 13 21:14:12.376961 containerd[1465]: time="2025-01-13T21:14:12.376909112Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:14:12.953899 containerd[1465]: time="2025-01-13T21:14:12.953841365Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:12.955585 containerd[1465]: time="2025-01-13T21:14:12.955365122Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907221" Jan 13 21:14:12.956903 containerd[1465]: time="2025-01-13T21:14:12.956840250Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:12.958525 containerd[1465]: time="2025-01-13T21:14:12.958398363Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.774883449s" Jan 13 21:14:12.958525 containerd[1465]: time="2025-01-13T21:14:12.958439861Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 21:14:12.961082 containerd[1465]: time="2025-01-13T21:14:12.961013358Z" level=info msg="CreateContainer within sandbox \"feecce1447f4aaefb84a9146b5a9a7310df54fd4a28d979f5cb6c9a0025d0fd7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 21:14:12.988167 containerd[1465]: time="2025-01-13T21:14:12.988064283Z" level=info msg="CreateContainer within sandbox \"feecce1447f4aaefb84a9146b5a9a7310df54fd4a28d979f5cb6c9a0025d0fd7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067\"" Jan 13 21:14:12.989534 containerd[1465]: time="2025-01-13T21:14:12.988580691Z" level=info msg="StartContainer for \"4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067\"" Jan 13 21:14:13.021167 systemd[1]: Started cri-containerd-4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067.scope - libcontainer container 4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067. 
Jan 13 21:14:13.049949 containerd[1465]: time="2025-01-13T21:14:13.049896732Z" level=info msg="StartContainer for \"4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067\" returns successfully" Jan 13 21:14:13.058296 containerd[1465]: time="2025-01-13T21:14:13.058122924Z" level=info msg="CreateContainer within sandbox \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:14:13.085281 containerd[1465]: time="2025-01-13T21:14:13.085215065Z" level=info msg="CreateContainer within sandbox \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f4abc027c5afc75093fa09fa394baecae20e1eed56459efd470d0e8de7685eb7\"" Jan 13 21:14:13.087180 containerd[1465]: time="2025-01-13T21:14:13.086227395Z" level=info msg="StartContainer for \"f4abc027c5afc75093fa09fa394baecae20e1eed56459efd470d0e8de7685eb7\"" Jan 13 21:14:13.116248 systemd[1]: Started cri-containerd-f4abc027c5afc75093fa09fa394baecae20e1eed56459efd470d0e8de7685eb7.scope - libcontainer container f4abc027c5afc75093fa09fa394baecae20e1eed56459efd470d0e8de7685eb7. Jan 13 21:14:13.145878 systemd[1]: cri-containerd-f4abc027c5afc75093fa09fa394baecae20e1eed56459efd470d0e8de7685eb7.scope: Deactivated successfully. Jan 13 21:14:13.153040 containerd[1465]: time="2025-01-13T21:14:13.152675121Z" level=info msg="StartContainer for \"f4abc027c5afc75093fa09fa394baecae20e1eed56459efd470d0e8de7685eb7\" returns successfully" Jan 13 21:14:13.470363 containerd[1465]: time="2025-01-13T21:14:13.470220033Z" level=info msg="shim disconnected" id=f4abc027c5afc75093fa09fa394baecae20e1eed56459efd470d0e8de7685eb7 namespace=k8s.io Jan 13 21:14:13.470363 containerd[1465]: time="2025-01-13T21:14:13.470314450Z" level=warning msg="cleaning up after shim disconnected" id=f4abc027c5afc75093fa09fa394baecae20e1eed56459efd470d0e8de7685eb7 namespace=k8s.io Jan 13 21:14:13.470363 containerd[1465]: time="2025-01-13T21:14:13.470336832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:14:14.070932 containerd[1465]: time="2025-01-13T21:14:14.070884258Z" level=info msg="CreateContainer within sandbox \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:14:14.101855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502671596.mount: Deactivated successfully. 
Jan 13 21:14:14.106077 kubelet[2697]: I0113 21:14:14.104410 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-xm7l4" podStartSLOduration=2.209350046 podStartE2EDuration="24.104390561s" podCreationTimestamp="2025-01-13 21:13:50 +0000 UTC" firstStartedPulling="2025-01-13 21:13:51.064423196 +0000 UTC m=+15.305959687" lastFinishedPulling="2025-01-13 21:14:12.959463701 +0000 UTC m=+37.201000202" observedRunningTime="2025-01-13 21:14:14.094723576 +0000 UTC m=+38.336260097" watchObservedRunningTime="2025-01-13 21:14:14.104390561 +0000 UTC m=+38.345927052" Jan 13 21:14:14.107859 containerd[1465]: time="2025-01-13T21:14:14.107818401Z" level=info msg="CreateContainer within sandbox \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da\"" Jan 13 21:14:14.109184 containerd[1465]: time="2025-01-13T21:14:14.109157483Z" level=info msg="StartContainer for \"4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da\"" Jan 13 21:14:14.180220 systemd[1]: Started cri-containerd-4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da.scope - libcontainer container 4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da. Jan 13 21:14:14.278522 containerd[1465]: time="2025-01-13T21:14:14.278477491Z" level=info msg="StartContainer for \"4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da\" returns successfully" Jan 13 21:14:14.374097 kubelet[2697]: I0113 21:14:14.374068 2697 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:14:14.408340 kubelet[2697]: I0113 21:14:14.408143 2697 topology_manager.go:215] "Topology Admit Handler" podUID="dbf7ca3e-5246-4628-8edd-53cbdc2c9b81" podNamespace="kube-system" podName="coredns-7db6d8ff4d-s75f6" Jan 13 21:14:14.416189 kubelet[2697]: I0113 21:14:14.415944 2697 topology_manager.go:215] "Topology Admit Handler" podUID="10345a83-6b64-4c8b-bcdd-b93976501456" podNamespace="kube-system" podName="coredns-7db6d8ff4d-57g7v" Jan 13 21:14:14.418838 systemd[1]: Created slice kubepods-burstable-poddbf7ca3e_5246_4628_8edd_53cbdc2c9b81.slice - libcontainer container kubepods-burstable-poddbf7ca3e_5246_4628_8edd_53cbdc2c9b81.slice. Jan 13 21:14:14.429399 systemd[1]: Created slice kubepods-burstable-pod10345a83_6b64_4c8b_bcdd_b93976501456.slice - libcontainer container kubepods-burstable-pod10345a83_6b64_4c8b_bcdd_b93976501456.slice. 
Jan 13 21:14:14.435482 kubelet[2697]: I0113 21:14:14.435448 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10345a83-6b64-4c8b-bcdd-b93976501456-config-volume\") pod \"coredns-7db6d8ff4d-57g7v\" (UID: \"10345a83-6b64-4c8b-bcdd-b93976501456\") " pod="kube-system/coredns-7db6d8ff4d-57g7v" Jan 13 21:14:14.435721 kubelet[2697]: I0113 21:14:14.435485 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbf7ca3e-5246-4628-8edd-53cbdc2c9b81-config-volume\") pod \"coredns-7db6d8ff4d-s75f6\" (UID: \"dbf7ca3e-5246-4628-8edd-53cbdc2c9b81\") " pod="kube-system/coredns-7db6d8ff4d-s75f6" Jan 13 21:14:14.435721 kubelet[2697]: I0113 21:14:14.435511 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s72tg\" (UniqueName: \"kubernetes.io/projected/10345a83-6b64-4c8b-bcdd-b93976501456-kube-api-access-s72tg\") pod \"coredns-7db6d8ff4d-57g7v\" (UID: \"10345a83-6b64-4c8b-bcdd-b93976501456\") " pod="kube-system/coredns-7db6d8ff4d-57g7v" Jan 13 21:14:14.435721 kubelet[2697]: I0113 21:14:14.435555 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwbx\" (UniqueName: \"kubernetes.io/projected/dbf7ca3e-5246-4628-8edd-53cbdc2c9b81-kube-api-access-5mwbx\") pod \"coredns-7db6d8ff4d-s75f6\" (UID: \"dbf7ca3e-5246-4628-8edd-53cbdc2c9b81\") " pod="kube-system/coredns-7db6d8ff4d-s75f6" Jan 13 21:14:14.725367 containerd[1465]: time="2025-01-13T21:14:14.724482306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s75f6,Uid:dbf7ca3e-5246-4628-8edd-53cbdc2c9b81,Namespace:kube-system,Attempt:0,}" Jan 13 21:14:14.733801 containerd[1465]: time="2025-01-13T21:14:14.733738980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-57g7v,Uid:10345a83-6b64-4c8b-bcdd-b93976501456,Namespace:kube-system,Attempt:0,}" Jan 13 21:14:17.284499 systemd-networkd[1378]: cilium_host: Link UP Jan 13 21:14:17.284903 systemd-networkd[1378]: cilium_net: Link UP Jan 13 21:14:17.291060 systemd-networkd[1378]: cilium_net: Gained carrier Jan 13 21:14:17.291767 systemd-networkd[1378]: cilium_host: Gained carrier Jan 13 21:14:17.397323 systemd-networkd[1378]: cilium_vxlan: Link UP Jan 13 21:14:17.397464 systemd-networkd[1378]: cilium_vxlan: Gained carrier Jan 13 21:14:17.660322 kernel: NET: Registered PF_ALG protocol family Jan 13 21:14:17.829166 systemd-networkd[1378]: cilium_host: Gained IPv6LL Jan 13 21:14:17.893300 systemd-networkd[1378]: cilium_net: Gained IPv6LL Jan 13 21:14:18.447453 systemd-networkd[1378]: lxc_health: Link UP Jan 13 21:14:18.463469 systemd-networkd[1378]: lxc_health: Gained carrier Jan 13 21:14:18.752915 kubelet[2697]: I0113 21:14:18.752726 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mjbkj" podStartSLOduration=10.402752464 podStartE2EDuration="28.752710873s" podCreationTimestamp="2025-01-13 21:13:50 +0000 UTC" firstStartedPulling="2025-01-13 21:13:50.831493966 +0000 UTC m=+15.073030467" lastFinishedPulling="2025-01-13 21:14:09.181452335 +0000 UTC m=+33.422988876" observedRunningTime="2025-01-13 21:14:15.118662444 +0000 UTC m=+39.360199035" watchObservedRunningTime="2025-01-13 21:14:18.752710873 +0000 UTC m=+42.994247374" Jan 13 21:14:18.803270 systemd-networkd[1378]: lxc9d4afaa9e82e: 
Link UP Jan 13 21:14:18.810203 kernel: eth0: renamed from tmpee0fe Jan 13 21:14:18.819092 systemd-networkd[1378]: lxc9d4afaa9e82e: Gained carrier Jan 13 21:14:18.826881 systemd-networkd[1378]: lxc409f3252c63d: Link UP Jan 13 21:14:18.837158 kernel: eth0: renamed from tmp9422a Jan 13 21:14:18.843988 systemd-networkd[1378]: lxc409f3252c63d: Gained carrier Jan 13 21:14:18.853475 systemd-networkd[1378]: cilium_vxlan: Gained IPv6LL Jan 13 21:14:20.005239 systemd-networkd[1378]: lxc9d4afaa9e82e: Gained IPv6LL Jan 13 21:14:20.325197 systemd-networkd[1378]: lxc_health: Gained IPv6LL Jan 13 21:14:20.709366 systemd-networkd[1378]: lxc409f3252c63d: Gained IPv6LL Jan 13 21:14:23.466158 containerd[1465]: time="2025-01-13T21:14:23.465871363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:14:23.466158 containerd[1465]: time="2025-01-13T21:14:23.465928170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:14:23.466158 containerd[1465]: time="2025-01-13T21:14:23.465942547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:23.466158 containerd[1465]: time="2025-01-13T21:14:23.466061691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:23.501768 systemd[1]: Started cri-containerd-ee0feb39aedc98fdc3673d1720196a9ee0af1c48a228a3d6c10f3b4c18d0feb8.scope - libcontainer container ee0feb39aedc98fdc3673d1720196a9ee0af1c48a228a3d6c10f3b4c18d0feb8. Jan 13 21:14:23.544923 containerd[1465]: time="2025-01-13T21:14:23.544818411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:14:23.544923 containerd[1465]: time="2025-01-13T21:14:23.544893181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:14:23.545226 containerd[1465]: time="2025-01-13T21:14:23.545097154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:23.545362 containerd[1465]: time="2025-01-13T21:14:23.545291249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:23.575770 containerd[1465]: time="2025-01-13T21:14:23.575558275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s75f6,Uid:dbf7ca3e-5246-4628-8edd-53cbdc2c9b81,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee0feb39aedc98fdc3673d1720196a9ee0af1c48a228a3d6c10f3b4c18d0feb8\"" Jan 13 21:14:23.582213 systemd[1]: Started cri-containerd-9422a58a8fa66b4af87cca6d8f7678ab40b163f6efc9cdb682b149eac8da76c2.scope - libcontainer container 9422a58a8fa66b4af87cca6d8f7678ab40b163f6efc9cdb682b149eac8da76c2. 
Jan 13 21:14:23.583638 containerd[1465]: time="2025-01-13T21:14:23.583245208Z" level=info msg="CreateContainer within sandbox \"ee0feb39aedc98fdc3673d1720196a9ee0af1c48a228a3d6c10f3b4c18d0feb8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:14:23.613332 containerd[1465]: time="2025-01-13T21:14:23.612658761Z" level=info msg="CreateContainer within sandbox \"ee0feb39aedc98fdc3673d1720196a9ee0af1c48a228a3d6c10f3b4c18d0feb8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"943bde0c3936d509d2d583be98f217910438a2081e0bc4d1365684c84fcd50fd\"" Jan 13 21:14:23.613499 containerd[1465]: time="2025-01-13T21:14:23.613471197Z" level=info msg="StartContainer for \"943bde0c3936d509d2d583be98f217910438a2081e0bc4d1365684c84fcd50fd\"" Jan 13 21:14:23.646179 systemd[1]: Started cri-containerd-943bde0c3936d509d2d583be98f217910438a2081e0bc4d1365684c84fcd50fd.scope - libcontainer container 943bde0c3936d509d2d583be98f217910438a2081e0bc4d1365684c84fcd50fd. Jan 13 21:14:23.675739 containerd[1465]: time="2025-01-13T21:14:23.675660747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-57g7v,Uid:10345a83-6b64-4c8b-bcdd-b93976501456,Namespace:kube-system,Attempt:0,} returns sandbox id \"9422a58a8fa66b4af87cca6d8f7678ab40b163f6efc9cdb682b149eac8da76c2\"" Jan 13 21:14:23.683499 containerd[1465]: time="2025-01-13T21:14:23.683447990Z" level=info msg="CreateContainer within sandbox \"9422a58a8fa66b4af87cca6d8f7678ab40b163f6efc9cdb682b149eac8da76c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:14:23.712722 containerd[1465]: time="2025-01-13T21:14:23.712652441Z" level=info msg="StartContainer for \"943bde0c3936d509d2d583be98f217910438a2081e0bc4d1365684c84fcd50fd\" returns successfully" Jan 13 21:14:23.723903 containerd[1465]: time="2025-01-13T21:14:23.723130036Z" level=info msg="CreateContainer within sandbox \"9422a58a8fa66b4af87cca6d8f7678ab40b163f6efc9cdb682b149eac8da76c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22a64396fa413192640391d5330786feee42c00f8ac8597b9140f1cbc8881918\"" Jan 13 21:14:23.724497 containerd[1465]: time="2025-01-13T21:14:23.724347873Z" level=info msg="StartContainer for \"22a64396fa413192640391d5330786feee42c00f8ac8597b9140f1cbc8881918\"" Jan 13 21:14:23.763087 systemd[1]: Started cri-containerd-22a64396fa413192640391d5330786feee42c00f8ac8597b9140f1cbc8881918.scope - libcontainer container 22a64396fa413192640391d5330786feee42c00f8ac8597b9140f1cbc8881918. 
Jan 13 21:14:23.790827 containerd[1465]: time="2025-01-13T21:14:23.790778987Z" level=info msg="StartContainer for \"22a64396fa413192640391d5330786feee42c00f8ac8597b9140f1cbc8881918\" returns successfully" Jan 13 21:14:24.124847 kubelet[2697]: I0113 21:14:24.123090 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-57g7v" podStartSLOduration=34.123071712 podStartE2EDuration="34.123071712s" podCreationTimestamp="2025-01-13 21:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:14:24.120800307 +0000 UTC m=+48.362336818" watchObservedRunningTime="2025-01-13 21:14:24.123071712 +0000 UTC m=+48.364608233" Jan 13 21:14:24.168426 kubelet[2697]: I0113 21:14:24.167235 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-s75f6" podStartSLOduration=34.167217516 podStartE2EDuration="34.167217516s" podCreationTimestamp="2025-01-13 21:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:14:24.14516617 +0000 UTC m=+48.386702681" watchObservedRunningTime="2025-01-13 21:14:24.167217516 +0000 UTC m=+48.408754017" Jan 13 21:14:49.069797 systemd[1]: Started sshd@7-172.24.4.27:22-172.24.4.1:53146.service - OpenSSH per-connection server daemon (172.24.4.1:53146). Jan 13 21:14:50.386655 sshd[4060]: Accepted publickey for core from 172.24.4.1 port 53146 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:14:50.388857 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:14:50.396491 systemd-logind[1440]: New session 10 of user core. Jan 13 21:14:50.403310 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:14:51.118082 sshd[4062]: Connection closed by 172.24.4.1 port 53146 Jan 13 21:14:51.118752 sshd-session[4060]: pam_unix(sshd:session): session closed for user core Jan 13 21:14:51.124581 systemd[1]: sshd@7-172.24.4.27:22-172.24.4.1:53146.service: Deactivated successfully. Jan 13 21:14:51.124768 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:14:51.127801 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:14:51.131141 systemd-logind[1440]: Removed session 10. Jan 13 21:14:56.139617 systemd[1]: Started sshd@8-172.24.4.27:22-172.24.4.1:47072.service - OpenSSH per-connection server daemon (172.24.4.1:47072). Jan 13 21:14:57.311858 sshd[4076]: Accepted publickey for core from 172.24.4.1 port 47072 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:14:57.314510 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:14:57.326125 systemd-logind[1440]: New session 11 of user core. Jan 13 21:14:57.332321 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:14:57.999771 sshd[4078]: Connection closed by 172.24.4.1 port 47072 Jan 13 21:14:58.000888 sshd-session[4076]: pam_unix(sshd:session): session closed for user core Jan 13 21:14:58.008804 systemd[1]: sshd@8-172.24.4.27:22-172.24.4.1:47072.service: Deactivated successfully. Jan 13 21:14:58.015203 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:14:58.017927 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:14:58.020523 systemd-logind[1440]: Removed session 11. 
Jan 13 21:15:03.021981 systemd[1]: Started sshd@9-172.24.4.27:22-172.24.4.1:47082.service - OpenSSH per-connection server daemon (172.24.4.1:47082). Jan 13 21:15:04.390507 sshd[4091]: Accepted publickey for core from 172.24.4.1 port 47082 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:15:04.393248 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:04.405651 systemd-logind[1440]: New session 12 of user core. Jan 13 21:15:04.411305 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:15:05.206380 sshd[4093]: Connection closed by 172.24.4.1 port 47082 Jan 13 21:15:05.207905 sshd-session[4091]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:05.216813 systemd[1]: sshd@9-172.24.4.27:22-172.24.4.1:47082.service: Deactivated successfully. Jan 13 21:15:05.221276 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:15:05.224508 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:15:05.228298 systemd-logind[1440]: Removed session 12. Jan 13 21:15:10.228622 systemd[1]: Started sshd@10-172.24.4.27:22-172.24.4.1:50144.service - OpenSSH per-connection server daemon (172.24.4.1:50144). Jan 13 21:15:11.478274 sshd[4105]: Accepted publickey for core from 172.24.4.1 port 50144 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:15:11.480946 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:11.493318 systemd-logind[1440]: New session 13 of user core. Jan 13 21:15:11.499535 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:15:12.220378 sshd[4107]: Connection closed by 172.24.4.1 port 50144 Jan 13 21:15:12.221526 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:12.235446 systemd[1]: sshd@10-172.24.4.27:22-172.24.4.1:50144.service: Deactivated successfully. Jan 13 21:15:12.238399 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:15:12.241465 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:15:12.248673 systemd[1]: Started sshd@11-172.24.4.27:22-172.24.4.1:50154.service - OpenSSH per-connection server daemon (172.24.4.1:50154). Jan 13 21:15:12.251418 systemd-logind[1440]: Removed session 13. Jan 13 21:15:13.562354 sshd[4118]: Accepted publickey for core from 172.24.4.1 port 50154 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:15:13.565269 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:13.576436 systemd-logind[1440]: New session 14 of user core. Jan 13 21:15:13.584312 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:15:14.356896 sshd[4122]: Connection closed by 172.24.4.1 port 50154 Jan 13 21:15:14.358224 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:14.369655 systemd[1]: sshd@11-172.24.4.27:22-172.24.4.1:50154.service: Deactivated successfully. Jan 13 21:15:14.374269 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:15:14.377459 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:15:14.387710 systemd[1]: Started sshd@12-172.24.4.27:22-172.24.4.1:53400.service - OpenSSH per-connection server daemon (172.24.4.1:53400). Jan 13 21:15:14.392228 systemd-logind[1440]: Removed session 14. 
Jan 13 21:15:15.650820 sshd[4130]: Accepted publickey for core from 172.24.4.1 port 53400 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:15:15.654464 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:15.668176 systemd-logind[1440]: New session 15 of user core. Jan 13 21:15:15.678321 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:15:16.302832 sshd[4132]: Connection closed by 172.24.4.1 port 53400 Jan 13 21:15:16.304063 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:16.321503 systemd[1]: sshd@12-172.24.4.27:22-172.24.4.1:53400.service: Deactivated successfully. Jan 13 21:15:16.328820 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:15:16.332182 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:15:16.339831 systemd-logind[1440]: Removed session 15. Jan 13 21:15:21.323680 systemd[1]: Started sshd@13-172.24.4.27:22-172.24.4.1:53408.service - OpenSSH per-connection server daemon (172.24.4.1:53408). Jan 13 21:15:22.578472 sshd[4145]: Accepted publickey for core from 172.24.4.1 port 53408 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:15:22.581645 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:22.594196 systemd-logind[1440]: New session 16 of user core. Jan 13 21:15:22.600411 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:15:23.321049 sshd[4147]: Connection closed by 172.24.4.1 port 53408 Jan 13 21:15:23.321604 sshd-session[4145]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:23.331885 systemd[1]: sshd@13-172.24.4.27:22-172.24.4.1:53408.service: Deactivated successfully. Jan 13 21:15:23.335091 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:15:23.336787 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:15:23.344599 systemd[1]: Started sshd@14-172.24.4.27:22-172.24.4.1:53416.service - OpenSSH per-connection server daemon (172.24.4.1:53416). Jan 13 21:15:23.347402 systemd-logind[1440]: Removed session 16. Jan 13 21:15:24.612253 sshd[4157]: Accepted publickey for core from 172.24.4.1 port 53416 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:15:24.614828 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:24.625097 systemd-logind[1440]: New session 17 of user core. Jan 13 21:15:24.635315 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:15:25.348972 sshd[4159]: Connection closed by 172.24.4.1 port 53416 Jan 13 21:15:25.351335 sshd-session[4157]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:25.361547 systemd[1]: sshd@14-172.24.4.27:22-172.24.4.1:53416.service: Deactivated successfully. Jan 13 21:15:25.365539 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:15:25.368241 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:15:25.376681 systemd[1]: Started sshd@15-172.24.4.27:22-172.24.4.1:45742.service - OpenSSH per-connection server daemon (172.24.4.1:45742). Jan 13 21:15:25.380679 systemd-logind[1440]: Removed session 17. 
Jan 13 21:15:26.619102 sshd[4167]: Accepted publickey for core from 172.24.4.1 port 45742 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:15:26.691387 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:26.703717 systemd-logind[1440]: New session 18 of user core. Jan 13 21:15:26.710428 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:15:29.371048 sshd[4169]: Connection closed by 172.24.4.1 port 45742 Jan 13 21:15:29.371478 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:29.386263 systemd[1]: sshd@15-172.24.4.27:22-172.24.4.1:45742.service: Deactivated successfully. Jan 13 21:15:29.391686 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:15:29.394591 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:15:29.408653 systemd[1]: Started sshd@16-172.24.4.27:22-172.24.4.1:45752.service - OpenSSH per-connection server daemon (172.24.4.1:45752). Jan 13 21:15:29.412204 systemd-logind[1440]: Removed session 18. Jan 13 21:15:30.716632 sshd[4187]: Accepted publickey for core from 172.24.4.1 port 45752 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:15:30.720193 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:30.734133 systemd-logind[1440]: New session 19 of user core. Jan 13 21:15:30.740416 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:15:31.738638 sshd[4189]: Connection closed by 172.24.4.1 port 45752 Jan 13 21:15:31.746861 systemd[1]: Started sshd@17-172.24.4.27:22-172.24.4.1:45760.service - OpenSSH per-connection server daemon (172.24.4.1:45760). Jan 13 21:15:31.808471 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:31.811246 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:15:31.812383 systemd[1]: sshd@16-172.24.4.27:22-172.24.4.1:45752.service: Deactivated successfully. Jan 13 21:15:31.815469 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:15:31.818472 systemd-logind[1440]: Removed session 19. Jan 13 21:15:33.201619 sshd[4196]: Accepted publickey for core from 172.24.4.1 port 45760 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:15:33.204315 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:33.214784 systemd-logind[1440]: New session 20 of user core. Jan 13 21:15:33.222704 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:15:33.931833 sshd[4200]: Connection closed by 172.24.4.1 port 45760 Jan 13 21:15:33.933387 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:33.942851 systemd[1]: sshd@17-172.24.4.27:22-172.24.4.1:45760.service: Deactivated successfully. Jan 13 21:15:33.947822 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:15:33.950906 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:15:33.954388 systemd-logind[1440]: Removed session 20. Jan 13 21:15:38.960724 systemd[1]: Started sshd@18-172.24.4.27:22-172.24.4.1:58520.service - OpenSSH per-connection server daemon (172.24.4.1:58520). 
Jan 13 21:15:40.315841 sshd[4216]: Accepted publickey for core from 172.24.4.1 port 58520 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:15:40.318692 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:40.328110 systemd-logind[1440]: New session 21 of user core. Jan 13 21:15:40.340435 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:15:41.043846 sshd[4218]: Connection closed by 172.24.4.1 port 58520 Jan 13 21:15:41.044878 sshd-session[4216]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:41.051181 systemd[1]: sshd@18-172.24.4.27:22-172.24.4.1:58520.service: Deactivated successfully. Jan 13 21:15:41.053607 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:15:41.054801 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:15:41.057139 systemd-logind[1440]: Removed session 21. Jan 13 21:15:46.067637 systemd[1]: Started sshd@19-172.24.4.27:22-172.24.4.1:48720.service - OpenSSH per-connection server daemon (172.24.4.1:48720). Jan 13 21:15:47.308793 sshd[4229]: Accepted publickey for core from 172.24.4.1 port 48720 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:15:47.311367 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:47.318974 systemd-logind[1440]: New session 22 of user core. Jan 13 21:15:47.331317 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 21:15:47.997458 sshd[4231]: Connection closed by 172.24.4.1 port 48720 Jan 13 21:15:47.998103 sshd-session[4229]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:48.002964 systemd[1]: sshd@19-172.24.4.27:22-172.24.4.1:48720.service: Deactivated successfully. Jan 13 21:15:48.006869 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:15:48.009157 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:15:48.010866 systemd-logind[1440]: Removed session 22. Jan 13 21:15:53.018862 systemd[1]: Started sshd@20-172.24.4.27:22-172.24.4.1:48724.service - OpenSSH per-connection server daemon (172.24.4.1:48724). Jan 13 21:15:54.146332 sshd[4244]: Accepted publickey for core from 172.24.4.1 port 48724 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:15:54.149562 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:54.161863 systemd-logind[1440]: New session 23 of user core. Jan 13 21:15:54.170354 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:15:54.878262 sshd[4246]: Connection closed by 172.24.4.1 port 48724 Jan 13 21:15:54.879763 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Jan 13 21:15:54.888960 systemd[1]: sshd@20-172.24.4.27:22-172.24.4.1:48724.service: Deactivated successfully. Jan 13 21:15:54.893257 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:15:54.897944 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:15:54.905630 systemd[1]: Started sshd@21-172.24.4.27:22-172.24.4.1:43752.service - OpenSSH per-connection server daemon (172.24.4.1:43752). Jan 13 21:15:54.908601 systemd-logind[1440]: Removed session 23. 
Jan 13 21:15:56.062479 sshd[4257]: Accepted publickey for core from 172.24.4.1 port 43752 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU Jan 13 21:15:56.065052 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:15:56.075266 systemd-logind[1440]: New session 24 of user core. Jan 13 21:15:56.086314 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:15:58.251421 containerd[1465]: time="2025-01-13T21:15:58.251309004Z" level=info msg="StopContainer for \"4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067\" with timeout 30 (s)" Jan 13 21:15:58.253340 containerd[1465]: time="2025-01-13T21:15:58.252730430Z" level=info msg="Stop container \"4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067\" with signal terminated" Jan 13 21:15:58.271201 systemd[1]: run-containerd-runc-k8s.io-4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da-runc.spF5BN.mount: Deactivated successfully. Jan 13 21:15:58.274667 systemd[1]: cri-containerd-4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067.scope: Deactivated successfully. Jan 13 21:15:58.291882 containerd[1465]: time="2025-01-13T21:15:58.291724167Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:15:58.302384 containerd[1465]: time="2025-01-13T21:15:58.302198746Z" level=info msg="StopContainer for \"4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da\" with timeout 2 (s)" Jan 13 21:15:58.302714 containerd[1465]: time="2025-01-13T21:15:58.302610989Z" level=info msg="Stop container \"4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da\" with signal terminated" Jan 13 21:15:58.304355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067-rootfs.mount: Deactivated successfully. Jan 13 21:15:58.312218 systemd-networkd[1378]: lxc_health: Link DOWN Jan 13 21:15:58.312639 systemd-networkd[1378]: lxc_health: Lost carrier Jan 13 21:15:58.324290 systemd[1]: cri-containerd-4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da.scope: Deactivated successfully. Jan 13 21:15:58.324658 systemd[1]: cri-containerd-4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da.scope: Consumed 8.430s CPU time. Jan 13 21:15:58.325766 containerd[1465]: time="2025-01-13T21:15:58.325717966Z" level=info msg="shim disconnected" id=4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067 namespace=k8s.io Jan 13 21:15:58.326297 containerd[1465]: time="2025-01-13T21:15:58.325915085Z" level=warning msg="cleaning up after shim disconnected" id=4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067 namespace=k8s.io Jan 13 21:15:58.326297 containerd[1465]: time="2025-01-13T21:15:58.325933931Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:15:58.355417 containerd[1465]: time="2025-01-13T21:15:58.355374337Z" level=info msg="StopContainer for \"4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067\" returns successfully" Jan 13 21:15:58.356597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da-rootfs.mount: Deactivated successfully. 
Jan 13 21:15:58.358321 containerd[1465]: time="2025-01-13T21:15:58.358228702Z" level=info msg="StopPodSandbox for \"feecce1447f4aaefb84a9146b5a9a7310df54fd4a28d979f5cb6c9a0025d0fd7\"" Jan 13 21:15:58.358321 containerd[1465]: time="2025-01-13T21:15:58.358290719Z" level=info msg="Container to stop \"4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:15:58.361136 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-feecce1447f4aaefb84a9146b5a9a7310df54fd4a28d979f5cb6c9a0025d0fd7-shm.mount: Deactivated successfully. Jan 13 21:15:58.365165 containerd[1465]: time="2025-01-13T21:15:58.364938679Z" level=info msg="shim disconnected" id=4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da namespace=k8s.io Jan 13 21:15:58.365165 containerd[1465]: time="2025-01-13T21:15:58.365029459Z" level=warning msg="cleaning up after shim disconnected" id=4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da namespace=k8s.io Jan 13 21:15:58.365165 containerd[1465]: time="2025-01-13T21:15:58.365043716Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:15:58.371892 systemd[1]: cri-containerd-feecce1447f4aaefb84a9146b5a9a7310df54fd4a28d979f5cb6c9a0025d0fd7.scope: Deactivated successfully. Jan 13 21:15:58.411826 containerd[1465]: time="2025-01-13T21:15:58.411712441Z" level=info msg="StopContainer for \"4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da\" returns successfully" Jan 13 21:15:58.415140 containerd[1465]: time="2025-01-13T21:15:58.413154936Z" level=info msg="StopPodSandbox for \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\"" Jan 13 21:15:58.415140 containerd[1465]: time="2025-01-13T21:15:58.413310468Z" level=info msg="Container to stop \"e52bbe9a3e959f2db6f8635d870ec237602f0a8fe8953930b28db656bc70c5c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:15:58.415140 containerd[1465]: time="2025-01-13T21:15:58.413402731Z" level=info msg="Container to stop \"f4abc027c5afc75093fa09fa394baecae20e1eed56459efd470d0e8de7685eb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:15:58.415140 containerd[1465]: time="2025-01-13T21:15:58.413420845Z" level=info msg="Container to stop \"4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:15:58.415140 containerd[1465]: time="2025-01-13T21:15:58.413470128Z" level=info msg="Container to stop \"77d5e8e22fc21696513fcb2a298e3ff57464c64fd5bb28a94b48cea418d6dc69\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:15:58.415140 containerd[1465]: time="2025-01-13T21:15:58.413513529Z" level=info msg="Container to stop \"5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:15:58.424814 systemd[1]: cri-containerd-55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8.scope: Deactivated successfully. 
Jan 13 21:15:58.454974 containerd[1465]: time="2025-01-13T21:15:58.454832008Z" level=info msg="shim disconnected" id=feecce1447f4aaefb84a9146b5a9a7310df54fd4a28d979f5cb6c9a0025d0fd7 namespace=k8s.io Jan 13 21:15:58.454974 containerd[1465]: time="2025-01-13T21:15:58.454891089Z" level=warning msg="cleaning up after shim disconnected" id=feecce1447f4aaefb84a9146b5a9a7310df54fd4a28d979f5cb6c9a0025d0fd7 namespace=k8s.io Jan 13 21:15:58.454974 containerd[1465]: time="2025-01-13T21:15:58.454901238Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:15:58.455369 containerd[1465]: time="2025-01-13T21:15:58.454844641Z" level=info msg="shim disconnected" id=55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8 namespace=k8s.io Jan 13 21:15:58.455369 containerd[1465]: time="2025-01-13T21:15:58.455065746Z" level=warning msg="cleaning up after shim disconnected" id=55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8 namespace=k8s.io Jan 13 21:15:58.455369 containerd[1465]: time="2025-01-13T21:15:58.455083529Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:15:58.473153 containerd[1465]: time="2025-01-13T21:15:58.472949905Z" level=info msg="TearDown network for sandbox \"feecce1447f4aaefb84a9146b5a9a7310df54fd4a28d979f5cb6c9a0025d0fd7\" successfully" Jan 13 21:15:58.473153 containerd[1465]: time="2025-01-13T21:15:58.472985862Z" level=info msg="StopPodSandbox for \"feecce1447f4aaefb84a9146b5a9a7310df54fd4a28d979f5cb6c9a0025d0fd7\" returns successfully" Jan 13 21:15:58.473514 containerd[1465]: time="2025-01-13T21:15:58.473178924Z" level=info msg="TearDown network for sandbox \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\" successfully" Jan 13 21:15:58.473514 containerd[1465]: time="2025-01-13T21:15:58.473196818Z" level=info msg="StopPodSandbox for \"55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8\" returns successfully" Jan 13 21:15:58.521158 kubelet[2697]: I0113 21:15:58.518564 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-host-proc-sys-net\") pod \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.523063 kubelet[2697]: I0113 21:15:58.522127 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:15:58.523063 kubelet[2697]: I0113 21:15:58.522259 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-hostproc\") pod \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.523063 kubelet[2697]: I0113 21:15:58.522429 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k79t2\" (UniqueName: \"kubernetes.io/projected/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-kube-api-access-k79t2\") pod \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.523063 kubelet[2697]: I0113 21:15:58.522539 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-hostproc" (OuterVolumeSpecName: "hostproc") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:15:58.523063 kubelet[2697]: I0113 21:15:58.522642 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-clustermesh-secrets\") pod \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.523548 kubelet[2697]: I0113 21:15:58.522851 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b14235e6-6427-4dbc-b1c7-8092fd05e624-cilium-config-path\") pod \"b14235e6-6427-4dbc-b1c7-8092fd05e624\" (UID: \"b14235e6-6427-4dbc-b1c7-8092fd05e624\") " Jan 13 21:15:58.528166 kubelet[2697]: I0113 21:15:58.527207 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cni-path\") pod \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.528166 kubelet[2697]: I0113 21:15:58.527280 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cilium-cgroup\") pod \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.528166 kubelet[2697]: I0113 21:15:58.527333 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-xtables-lock\") pod \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.528166 kubelet[2697]: I0113 21:15:58.527386 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-hubble-tls\") pod \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.528166 kubelet[2697]: I0113 21:15:58.527434 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cilium-run\") pod 
\"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.528166 kubelet[2697]: I0113 21:15:58.527514 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-host-proc-sys-kernel\") pod \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.528683 kubelet[2697]: I0113 21:15:58.527572 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmqr6\" (UniqueName: \"kubernetes.io/projected/b14235e6-6427-4dbc-b1c7-8092fd05e624-kube-api-access-wmqr6\") pod \"b14235e6-6427-4dbc-b1c7-8092fd05e624\" (UID: \"b14235e6-6427-4dbc-b1c7-8092fd05e624\") " Jan 13 21:15:58.528683 kubelet[2697]: I0113 21:15:58.527623 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-lib-modules\") pod \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.528683 kubelet[2697]: I0113 21:15:58.527671 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-etc-cni-netd\") pod \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.528683 kubelet[2697]: I0113 21:15:58.527724 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cilium-config-path\") pod \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.528683 kubelet[2697]: I0113 21:15:58.527769 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-bpf-maps\") pod \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\" (UID: \"b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca\") " Jan 13 21:15:58.528683 kubelet[2697]: I0113 21:15:58.527853 2697 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-host-proc-sys-net\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.529159 kubelet[2697]: I0113 21:15:58.527886 2697 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-hostproc\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.529159 kubelet[2697]: I0113 21:15:58.527940 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:15:58.531229 kubelet[2697]: I0113 21:15:58.531161 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cni-path" (OuterVolumeSpecName: "cni-path") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:15:58.531795 kubelet[2697]: I0113 21:15:58.531622 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:15:58.532056 kubelet[2697]: I0113 21:15:58.531729 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:15:58.532955 kubelet[2697]: I0113 21:15:58.532867 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:15:58.533516 kubelet[2697]: I0113 21:15:58.533413 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:15:58.537182 kubelet[2697]: I0113 21:15:58.537140 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:15:58.537426 kubelet[2697]: I0113 21:15:58.537384 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:15:58.544864 kubelet[2697]: I0113 21:15:58.544800 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-kube-api-access-k79t2" (OuterVolumeSpecName: "kube-api-access-k79t2") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). InnerVolumeSpecName "kube-api-access-k79t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:15:58.548836 kubelet[2697]: I0113 21:15:58.547369 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:15:58.549387 kubelet[2697]: I0113 21:15:58.549352 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b14235e6-6427-4dbc-b1c7-8092fd05e624-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b14235e6-6427-4dbc-b1c7-8092fd05e624" (UID: "b14235e6-6427-4dbc-b1c7-8092fd05e624"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:15:58.549514 kubelet[2697]: I0113 21:15:58.549482 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14235e6-6427-4dbc-b1c7-8092fd05e624-kube-api-access-wmqr6" (OuterVolumeSpecName: "kube-api-access-wmqr6") pod "b14235e6-6427-4dbc-b1c7-8092fd05e624" (UID: "b14235e6-6427-4dbc-b1c7-8092fd05e624"). InnerVolumeSpecName "kube-api-access-wmqr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:15:58.550796 kubelet[2697]: I0113 21:15:58.550763 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:15:58.551438 kubelet[2697]: I0113 21:15:58.551414 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" (UID: "b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:15:58.628355 kubelet[2697]: I0113 21:15:58.628275 2697 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k79t2\" (UniqueName: \"kubernetes.io/projected/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-kube-api-access-k79t2\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.628355 kubelet[2697]: I0113 21:15:58.628330 2697 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-clustermesh-secrets\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.628355 kubelet[2697]: I0113 21:15:58.628351 2697 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b14235e6-6427-4dbc-b1c7-8092fd05e624-cilium-config-path\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.628705 kubelet[2697]: I0113 21:15:58.628372 2697 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-xtables-lock\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.628705 kubelet[2697]: I0113 21:15:58.628394 2697 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-hubble-tls\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.628705 kubelet[2697]: I0113 21:15:58.628411 2697 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cilium-run\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.628705 kubelet[2697]: I0113 21:15:58.628429 2697 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cni-path\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.628705 kubelet[2697]: I0113 21:15:58.628446 2697 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cilium-cgroup\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.628705 kubelet[2697]: I0113 21:15:58.628464 2697 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-host-proc-sys-kernel\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.628705 kubelet[2697]: I0113 21:15:58.628483 2697 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wmqr6\" (UniqueName: \"kubernetes.io/projected/b14235e6-6427-4dbc-b1c7-8092fd05e624-kube-api-access-wmqr6\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.629226 kubelet[2697]: I0113 21:15:58.628501 2697 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-lib-modules\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.629226 kubelet[2697]: I0113 21:15:58.628520 2697 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-etc-cni-netd\") 
on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.629226 kubelet[2697]: I0113 21:15:58.628537 2697 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-cilium-config-path\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:58.629226 kubelet[2697]: I0113 21:15:58.628554 2697 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca-bpf-maps\") on node \"ci-4152-2-0-a-cb16eea878.novalocal\" DevicePath \"\"" Jan 13 21:15:59.269475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-feecce1447f4aaefb84a9146b5a9a7310df54fd4a28d979f5cb6c9a0025d0fd7-rootfs.mount: Deactivated successfully. Jan 13 21:15:59.270065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8-rootfs.mount: Deactivated successfully. Jan 13 21:15:59.270255 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55bf8b72e532e425ee6ce2ea8d68006326f0a556bcf5c41be34fe49715b5ebf8-shm.mount: Deactivated successfully. Jan 13 21:15:59.270490 systemd[1]: var-lib-kubelet-pods-b14235e6\x2d6427\x2d4dbc\x2db1c7\x2d8092fd05e624-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwmqr6.mount: Deactivated successfully. Jan 13 21:15:59.270667 systemd[1]: var-lib-kubelet-pods-b2ff82b8\x2d1d0c\x2d44d8\x2d85b4\x2d20ccf2ba07ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk79t2.mount: Deactivated successfully. Jan 13 21:15:59.270815 systemd[1]: var-lib-kubelet-pods-b2ff82b8\x2d1d0c\x2d44d8\x2d85b4\x2d20ccf2ba07ca-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:15:59.270955 systemd[1]: var-lib-kubelet-pods-b2ff82b8\x2d1d0c\x2d44d8\x2d85b4\x2d20ccf2ba07ca-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:15:59.427819 kubelet[2697]: I0113 21:15:59.427296 2697 scope.go:117] "RemoveContainer" containerID="4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067" Jan 13 21:15:59.439235 containerd[1465]: time="2025-01-13T21:15:59.438823655Z" level=info msg="RemoveContainer for \"4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067\"" Jan 13 21:15:59.448393 systemd[1]: Removed slice kubepods-besteffort-podb14235e6_6427_4dbc_b1c7_8092fd05e624.slice - libcontainer container kubepods-besteffort-podb14235e6_6427_4dbc_b1c7_8092fd05e624.slice. Jan 13 21:15:59.467789 systemd[1]: Removed slice kubepods-burstable-podb2ff82b8_1d0c_44d8_85b4_20ccf2ba07ca.slice - libcontainer container kubepods-burstable-podb2ff82b8_1d0c_44d8_85b4_20ccf2ba07ca.slice. Jan 13 21:15:59.468522 systemd[1]: kubepods-burstable-podb2ff82b8_1d0c_44d8_85b4_20ccf2ba07ca.slice: Consumed 8.512s CPU time. 
Jan 13 21:15:59.498840 containerd[1465]: time="2025-01-13T21:15:59.498736554Z" level=info msg="RemoveContainer for \"4104bf1df5388764b43072c288560b5dc0eee017fae0d59c514490639ea26067\" returns successfully"
Jan 13 21:15:59.499497 kubelet[2697]: I0113 21:15:59.499359 2697 scope.go:117] "RemoveContainer" containerID="4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da"
Jan 13 21:15:59.501993 containerd[1465]: time="2025-01-13T21:15:59.501897643Z" level=info msg="RemoveContainer for \"4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da\""
Jan 13 21:15:59.528070 containerd[1465]: time="2025-01-13T21:15:59.527253959Z" level=info msg="RemoveContainer for \"4235297e7b6daf1a45ee94d228a526f6ca57934ccc37ace1fc9aa33153f154da\" returns successfully"
Jan 13 21:15:59.528290 kubelet[2697]: I0113 21:15:59.527682 2697 scope.go:117] "RemoveContainer" containerID="f4abc027c5afc75093fa09fa394baecae20e1eed56459efd470d0e8de7685eb7"
Jan 13 21:15:59.533670 containerd[1465]: time="2025-01-13T21:15:59.532681420Z" level=info msg="RemoveContainer for \"f4abc027c5afc75093fa09fa394baecae20e1eed56459efd470d0e8de7685eb7\""
Jan 13 21:15:59.599968 containerd[1465]: time="2025-01-13T21:15:59.598419657Z" level=info msg="RemoveContainer for \"f4abc027c5afc75093fa09fa394baecae20e1eed56459efd470d0e8de7685eb7\" returns successfully"
Jan 13 21:15:59.600233 kubelet[2697]: I0113 21:15:59.599112 2697 scope.go:117] "RemoveContainer" containerID="e52bbe9a3e959f2db6f8635d870ec237602f0a8fe8953930b28db656bc70c5c3"
Jan 13 21:15:59.606099 containerd[1465]: time="2025-01-13T21:15:59.605502844Z" level=info msg="RemoveContainer for \"e52bbe9a3e959f2db6f8635d870ec237602f0a8fe8953930b28db656bc70c5c3\""
Jan 13 21:15:59.781827 containerd[1465]: time="2025-01-13T21:15:59.781632130Z" level=info msg="RemoveContainer for \"e52bbe9a3e959f2db6f8635d870ec237602f0a8fe8953930b28db656bc70c5c3\" returns successfully"
Jan 13 21:15:59.782880 kubelet[2697]: I0113 21:15:59.782614 2697 scope.go:117] "RemoveContainer" containerID="5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c"
Jan 13 21:15:59.786193 containerd[1465]: time="2025-01-13T21:15:59.785680455Z" level=info msg="RemoveContainer for \"5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c\""
Jan 13 21:15:59.814137 containerd[1465]: time="2025-01-13T21:15:59.813859395Z" level=info msg="RemoveContainer for \"5c1c1739e2822c10928c9366bdf42433b7bf883a86abae3c5326837c0602f70c\" returns successfully"
Jan 13 21:15:59.814895 kubelet[2697]: I0113 21:15:59.814553 2697 scope.go:117] "RemoveContainer" containerID="77d5e8e22fc21696513fcb2a298e3ff57464c64fd5bb28a94b48cea418d6dc69"
Jan 13 21:15:59.818961 containerd[1465]: time="2025-01-13T21:15:59.818863031Z" level=info msg="RemoveContainer for \"77d5e8e22fc21696513fcb2a298e3ff57464c64fd5bb28a94b48cea418d6dc69\""
Jan 13 21:15:59.834084 containerd[1465]: time="2025-01-13T21:15:59.833982243Z" level=info msg="RemoveContainer for \"77d5e8e22fc21696513fcb2a298e3ff57464c64fd5bb28a94b48cea418d6dc69\" returns successfully"
Jan 13 21:15:59.877782 kubelet[2697]: I0113 21:15:59.877307 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b14235e6-6427-4dbc-b1c7-8092fd05e624" path="/var/lib/kubelet/pods/b14235e6-6427-4dbc-b1c7-8092fd05e624/volumes"
Jan 13 21:15:59.879056 kubelet[2697]: I0113 21:15:59.878975 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" path="/var/lib/kubelet/pods/b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca/volumes"
Jan 13 21:16:00.430677 sshd[4259]: Connection closed by 172.24.4.1 port 43752
Jan 13 21:16:00.430393 sshd-session[4257]: pam_unix(sshd:session): session closed for user core
Jan 13 21:16:00.444693 systemd[1]: sshd@21-172.24.4.27:22-172.24.4.1:43752.service: Deactivated successfully.
Jan 13 21:16:00.451073 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 21:16:00.451509 systemd[1]: session-24.scope: Consumed 1.139s CPU time.
Jan 13 21:16:00.452905 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit.
Jan 13 21:16:00.463749 systemd[1]: Started sshd@22-172.24.4.27:22-172.24.4.1:43756.service - OpenSSH per-connection server daemon (172.24.4.1:43756).
Jan 13 21:16:00.470211 systemd-logind[1440]: Removed session 24.
Jan 13 21:16:01.006516 kubelet[2697]: E0113 21:16:01.006342 2697 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:16:01.844140 sshd[4421]: Accepted publickey for core from 172.24.4.1 port 43756 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:16:01.846726 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:16:01.856346 systemd-logind[1440]: New session 25 of user core.
Jan 13 21:16:01.862269 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 21:16:03.866611 kubelet[2697]: I0113 21:16:03.866497 2697 topology_manager.go:215] "Topology Admit Handler" podUID="a64e8db5-2627-4c44-bb92-5f049ee8d73d" podNamespace="kube-system" podName="cilium-2j8h2"
Jan 13 21:16:03.866611 kubelet[2697]: E0113 21:16:03.866578 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" containerName="clean-cilium-state"
Jan 13 21:16:03.868026 kubelet[2697]: E0113 21:16:03.866591 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" containerName="mount-cgroup"
Jan 13 21:16:03.868026 kubelet[2697]: E0113 21:16:03.867211 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" containerName="apply-sysctl-overwrites"
Jan 13 21:16:03.868026 kubelet[2697]: E0113 21:16:03.867220 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" containerName="mount-bpf-fs"
Jan 13 21:16:03.868026 kubelet[2697]: E0113 21:16:03.867229 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b14235e6-6427-4dbc-b1c7-8092fd05e624" containerName="cilium-operator"
Jan 13 21:16:03.868026 kubelet[2697]: E0113 21:16:03.867236 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" containerName="cilium-agent"
Jan 13 21:16:03.868026 kubelet[2697]: I0113 21:16:03.867284 2697 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ff82b8-1d0c-44d8-85b4-20ccf2ba07ca" containerName="cilium-agent"
Jan 13 21:16:03.868026 kubelet[2697]: I0113 21:16:03.867293 2697 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14235e6-6427-4dbc-b1c7-8092fd05e624" containerName="cilium-operator"
Jan 13 21:16:03.876688 kubelet[2697]: W0113 21:16:03.876662 2697 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4152-2-0-a-cb16eea878.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-a-cb16eea878.novalocal' and this object
Jan 13 21:16:03.876978 kubelet[2697]: E0113 21:16:03.876955 2697 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4152-2-0-a-cb16eea878.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-a-cb16eea878.novalocal' and this object
Jan 13 21:16:03.877255 kubelet[2697]: W0113 21:16:03.877222 2697 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4152-2-0-a-cb16eea878.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-a-cb16eea878.novalocal' and this object
Jan 13 21:16:03.877255 kubelet[2697]: E0113 21:16:03.877243 2697 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4152-2-0-a-cb16eea878.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-a-cb16eea878.novalocal' and this object
Jan 13 21:16:03.878060 kubelet[2697]: W0113 21:16:03.877603 2697 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152-2-0-a-cb16eea878.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-a-cb16eea878.novalocal' and this object
Jan 13 21:16:03.878060 kubelet[2697]: E0113 21:16:03.877622 2697 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152-2-0-a-cb16eea878.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-a-cb16eea878.novalocal' and this object
Jan 13 21:16:03.878525 systemd[1]: Created slice kubepods-burstable-poda64e8db5_2627_4c44_bb92_5f049ee8d73d.slice - libcontainer container kubepods-burstable-poda64e8db5_2627_4c44_bb92_5f049ee8d73d.slice.
Jan 13 21:16:03.883029 kubelet[2697]: W0113 21:16:03.882051 2697 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152-2-0-a-cb16eea878.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-a-cb16eea878.novalocal' and this object
Jan 13 21:16:03.883156 kubelet[2697]: E0113 21:16:03.883130 2697 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152-2-0-a-cb16eea878.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-a-cb16eea878.novalocal' and this object
Jan 13 21:16:03.966476 kubelet[2697]: I0113 21:16:03.966430 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a64e8db5-2627-4c44-bb92-5f049ee8d73d-lib-modules\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.966823 kubelet[2697]: I0113 21:16:03.966798 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a64e8db5-2627-4c44-bb92-5f049ee8d73d-host-proc-sys-net\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.966982 kubelet[2697]: I0113 21:16:03.966960 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a64e8db5-2627-4c44-bb92-5f049ee8d73d-hostproc\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.967215 kubelet[2697]: I0113 21:16:03.967158 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a64e8db5-2627-4c44-bb92-5f049ee8d73d-cilium-cgroup\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.967355 kubelet[2697]: I0113 21:16:03.967336 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a64e8db5-2627-4c44-bb92-5f049ee8d73d-cni-path\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.967954 kubelet[2697]: I0113 21:16:03.967565 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a64e8db5-2627-4c44-bb92-5f049ee8d73d-etc-cni-netd\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.967954 kubelet[2697]: I0113 21:16:03.967600 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a64e8db5-2627-4c44-bb92-5f049ee8d73d-hubble-tls\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.967954 kubelet[2697]: I0113 21:16:03.967632 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a64e8db5-2627-4c44-bb92-5f049ee8d73d-bpf-maps\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.967954 kubelet[2697]: I0113 21:16:03.967658 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a64e8db5-2627-4c44-bb92-5f049ee8d73d-xtables-lock\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.967954 kubelet[2697]: I0113 21:16:03.967703 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a64e8db5-2627-4c44-bb92-5f049ee8d73d-clustermesh-secrets\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.967954 kubelet[2697]: I0113 21:16:03.967733 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fswbn\" (UniqueName: \"kubernetes.io/projected/a64e8db5-2627-4c44-bb92-5f049ee8d73d-kube-api-access-fswbn\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.968241 kubelet[2697]: I0113 21:16:03.967762 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a64e8db5-2627-4c44-bb92-5f049ee8d73d-cilium-run\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.968241 kubelet[2697]: I0113 21:16:03.967789 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a64e8db5-2627-4c44-bb92-5f049ee8d73d-cilium-config-path\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.968241 kubelet[2697]: I0113 21:16:03.967812 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a64e8db5-2627-4c44-bb92-5f049ee8d73d-cilium-ipsec-secrets\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:03.968241 kubelet[2697]: I0113 21:16:03.967836 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a64e8db5-2627-4c44-bb92-5f049ee8d73d-host-proc-sys-kernel\") pod \"cilium-2j8h2\" (UID: \"a64e8db5-2627-4c44-bb92-5f049ee8d73d\") " pod="kube-system/cilium-2j8h2"
Jan 13 21:16:04.081874 sshd[4423]: Connection closed by 172.24.4.1 port 43756
Jan 13 21:16:04.084344 sshd-session[4421]: pam_unix(sshd:session): session closed for user core
Jan 13 21:16:04.096044 systemd[1]: sshd@22-172.24.4.27:22-172.24.4.1:43756.service: Deactivated successfully.
Jan 13 21:16:04.098145 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 21:16:04.098403 systemd[1]: session-25.scope: Consumed 1.520s CPU time.
Jan 13 21:16:04.101439 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit.
Jan 13 21:16:04.109104 systemd[1]: Started sshd@23-172.24.4.27:22-172.24.4.1:60920.service - OpenSSH per-connection server daemon (172.24.4.1:60920).
Jan 13 21:16:04.111215 systemd-logind[1440]: Removed session 25.
Jan 13 21:16:05.070074 kubelet[2697]: E0113 21:16:05.069866 2697 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jan 13 21:16:05.070074 kubelet[2697]: E0113 21:16:05.069921 2697 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Jan 13 21:16:05.070074 kubelet[2697]: E0113 21:16:05.069960 2697 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-2j8h2: failed to sync secret cache: timed out waiting for the condition
Jan 13 21:16:05.070887 kubelet[2697]: E0113 21:16:05.069880 2697 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Jan 13 21:16:05.071200 kubelet[2697]: E0113 21:16:05.071074 2697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a64e8db5-2627-4c44-bb92-5f049ee8d73d-clustermesh-secrets podName:a64e8db5-2627-4c44-bb92-5f049ee8d73d nodeName:}" failed. No retries permitted until 2025-01-13 21:16:05.569986557 +0000 UTC m=+149.811523098 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/a64e8db5-2627-4c44-bb92-5f049ee8d73d-clustermesh-secrets") pod "cilium-2j8h2" (UID: "a64e8db5-2627-4c44-bb92-5f049ee8d73d") : failed to sync secret cache: timed out waiting for the condition
Jan 13 21:16:05.071200 kubelet[2697]: E0113 21:16:05.071128 2697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a64e8db5-2627-4c44-bb92-5f049ee8d73d-hubble-tls podName:a64e8db5-2627-4c44-bb92-5f049ee8d73d nodeName:}" failed. No retries permitted until 2025-01-13 21:16:05.571106948 +0000 UTC m=+149.812643489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/a64e8db5-2627-4c44-bb92-5f049ee8d73d-hubble-tls") pod "cilium-2j8h2" (UID: "a64e8db5-2627-4c44-bb92-5f049ee8d73d") : failed to sync secret cache: timed out waiting for the condition
Jan 13 21:16:05.071200 kubelet[2697]: E0113 21:16:05.071159 2697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a64e8db5-2627-4c44-bb92-5f049ee8d73d-cilium-ipsec-secrets podName:a64e8db5-2627-4c44-bb92-5f049ee8d73d nodeName:}" failed. No retries permitted until 2025-01-13 21:16:05.571141814 +0000 UTC m=+149.812678355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/a64e8db5-2627-4c44-bb92-5f049ee8d73d-cilium-ipsec-secrets") pod "cilium-2j8h2" (UID: "a64e8db5-2627-4c44-bb92-5f049ee8d73d") : failed to sync secret cache: timed out waiting for the condition
Jan 13 21:16:05.350902 sshd[4434]: Accepted publickey for core from 172.24.4.1 port 60920 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:16:05.353629 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:16:05.363157 systemd-logind[1440]: New session 26 of user core.
Jan 13 21:16:05.368252 systemd[1]: Started session-26.scope - Session 26 of User core.
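Each MountVolume.SetUp failure above is requeued with "durationBeforeRetry 500ms", and kubelet's nestedpendingoperations backs off exponentially on repeated failures. A toy sketch of such a doubling schedule (the 500 ms start is taken from the log; the factor and cap here are assumptions, not kubelet's exact constants):

def backoff_delays(initial=0.5, factor=2.0, cap=120.0, attempts=8):
    # Yield successive retry delays in seconds, doubling up to a cap.
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * factor, cap)

print(list(backoff_delays()))  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0]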
Jan 13 21:16:05.703986 containerd[1465]: time="2025-01-13T21:16:05.703221545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2j8h2,Uid:a64e8db5-2627-4c44-bb92-5f049ee8d73d,Namespace:kube-system,Attempt:0,}"
Jan 13 21:16:05.760439 containerd[1465]: time="2025-01-13T21:16:05.760251967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:16:05.760961 containerd[1465]: time="2025-01-13T21:16:05.760881538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:16:05.761270 containerd[1465]: time="2025-01-13T21:16:05.760972528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:16:05.761270 containerd[1465]: time="2025-01-13T21:16:05.761212328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:16:05.803235 systemd[1]: Started cri-containerd-054e9e2bfad819d8a8e46aca9d78cf1855f353c421628cb027b8cff240fbbc32.scope - libcontainer container 054e9e2bfad819d8a8e46aca9d78cf1855f353c421628cb027b8cff240fbbc32.
Jan 13 21:16:05.826109 containerd[1465]: time="2025-01-13T21:16:05.825986215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2j8h2,Uid:a64e8db5-2627-4c44-bb92-5f049ee8d73d,Namespace:kube-system,Attempt:0,} returns sandbox id \"054e9e2bfad819d8a8e46aca9d78cf1855f353c421628cb027b8cff240fbbc32\""
Jan 13 21:16:05.829847 containerd[1465]: time="2025-01-13T21:16:05.829807022Z" level=info msg="CreateContainer within sandbox \"054e9e2bfad819d8a8e46aca9d78cf1855f353c421628cb027b8cff240fbbc32\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:16:05.845720 containerd[1465]: time="2025-01-13T21:16:05.845678013Z" level=info msg="CreateContainer within sandbox \"054e9e2bfad819d8a8e46aca9d78cf1855f353c421628cb027b8cff240fbbc32\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"01f342e373bd8a0a149339cbc277b082a4e03e660568d8779feef8ed92fd0890\""
Jan 13 21:16:05.847141 containerd[1465]: time="2025-01-13T21:16:05.846239667Z" level=info msg="StartContainer for \"01f342e373bd8a0a149339cbc277b082a4e03e660568d8779feef8ed92fd0890\""
Jan 13 21:16:05.872334 systemd[1]: Started cri-containerd-01f342e373bd8a0a149339cbc277b082a4e03e660568d8779feef8ed92fd0890.scope - libcontainer container 01f342e373bd8a0a149339cbc277b082a4e03e660568d8779feef8ed92fd0890.
Jan 13 21:16:05.898193 containerd[1465]: time="2025-01-13T21:16:05.898149453Z" level=info msg="StartContainer for \"01f342e373bd8a0a149339cbc277b082a4e03e660568d8779feef8ed92fd0890\" returns successfully"
Jan 13 21:16:05.905434 systemd[1]: cri-containerd-01f342e373bd8a0a149339cbc277b082a4e03e660568d8779feef8ed92fd0890.scope: Deactivated successfully.
Jan 13 21:16:05.922848 sshd[4436]: Connection closed by 172.24.4.1 port 60920
Jan 13 21:16:05.921981 sshd-session[4434]: pam_unix(sshd:session): session closed for user core
Jan 13 21:16:05.933773 systemd[1]: sshd@23-172.24.4.27:22-172.24.4.1:60920.service: Deactivated successfully.
Jan 13 21:16:05.935504 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 21:16:05.936308 systemd-logind[1440]: Session 26 logged out. Waiting for processes to exit.
Jan 13 21:16:05.939115 systemd[1]: Started sshd@24-172.24.4.27:22-172.24.4.1:60934.service - OpenSSH per-connection server daemon (172.24.4.1:60934).
Jan 13 21:16:05.941551 systemd-logind[1440]: Removed session 26.
Jan 13 21:16:05.973450 containerd[1465]: time="2025-01-13T21:16:05.972972921Z" level=info msg="shim disconnected" id=01f342e373bd8a0a149339cbc277b082a4e03e660568d8779feef8ed92fd0890 namespace=k8s.io
Jan 13 21:16:05.973450 containerd[1465]: time="2025-01-13T21:16:05.973059694Z" level=warning msg="cleaning up after shim disconnected" id=01f342e373bd8a0a149339cbc277b082a4e03e660568d8779feef8ed92fd0890 namespace=k8s.io
Jan 13 21:16:05.973450 containerd[1465]: time="2025-01-13T21:16:05.973069863Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:16:06.007922 kubelet[2697]: E0113 21:16:06.007806 2697 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:16:06.499742 containerd[1465]: time="2025-01-13T21:16:06.499423986Z" level=info msg="CreateContainer within sandbox \"054e9e2bfad819d8a8e46aca9d78cf1855f353c421628cb027b8cff240fbbc32\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:16:06.535786 containerd[1465]: time="2025-01-13T21:16:06.535340270Z" level=info msg="CreateContainer within sandbox \"054e9e2bfad819d8a8e46aca9d78cf1855f353c421628cb027b8cff240fbbc32\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cef6981ab3428ea8cac73102400b0aa2ed51fe05ea87ac0c3bdbb6eb4d7157df\""
Jan 13 21:16:06.536978 containerd[1465]: time="2025-01-13T21:16:06.536888776Z" level=info msg="StartContainer for \"cef6981ab3428ea8cac73102400b0aa2ed51fe05ea87ac0c3bdbb6eb4d7157df\""
Jan 13 21:16:06.585171 systemd[1]: Started cri-containerd-cef6981ab3428ea8cac73102400b0aa2ed51fe05ea87ac0c3bdbb6eb4d7157df.scope - libcontainer container cef6981ab3428ea8cac73102400b0aa2ed51fe05ea87ac0c3bdbb6eb4d7157df.
Jan 13 21:16:06.625239 containerd[1465]: time="2025-01-13T21:16:06.625161051Z" level=info msg="StartContainer for \"cef6981ab3428ea8cac73102400b0aa2ed51fe05ea87ac0c3bdbb6eb4d7157df\" returns successfully"
Jan 13 21:16:06.627236 systemd[1]: cri-containerd-cef6981ab3428ea8cac73102400b0aa2ed51fe05ea87ac0c3bdbb6eb4d7157df.scope: Deactivated successfully.
Jan 13 21:16:06.648718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cef6981ab3428ea8cac73102400b0aa2ed51fe05ea87ac0c3bdbb6eb4d7157df-rootfs.mount: Deactivated successfully.
Jan 13 21:16:06.654774 containerd[1465]: time="2025-01-13T21:16:06.654707126Z" level=info msg="shim disconnected" id=cef6981ab3428ea8cac73102400b0aa2ed51fe05ea87ac0c3bdbb6eb4d7157df namespace=k8s.io
Jan 13 21:16:06.654993 containerd[1465]: time="2025-01-13T21:16:06.654951214Z" level=warning msg="cleaning up after shim disconnected" id=cef6981ab3428ea8cac73102400b0aa2ed51fe05ea87ac0c3bdbb6eb4d7157df namespace=k8s.io
Jan 13 21:16:06.655105 containerd[1465]: time="2025-01-13T21:16:06.655088191Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:16:07.130737 sshd[4532]: Accepted publickey for core from 172.24.4.1 port 60934 ssh2: RSA SHA256:hKoBQog9Ix5IHISTCXtmi9gjd0Uf3sTbMtknE4KXwvU
Jan 13 21:16:07.130607 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:16:07.148122 systemd-logind[1440]: New session 27 of user core.
Jan 13 21:16:07.154739 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 13 21:16:07.502911 containerd[1465]: time="2025-01-13T21:16:07.502732588Z" level=info msg="CreateContainer within sandbox \"054e9e2bfad819d8a8e46aca9d78cf1855f353c421628cb027b8cff240fbbc32\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:16:07.566383 containerd[1465]: time="2025-01-13T21:16:07.564050461Z" level=info msg="CreateContainer within sandbox \"054e9e2bfad819d8a8e46aca9d78cf1855f353c421628cb027b8cff240fbbc32\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a25a324013697710df9cb5a3304e205722f0291a0a3a619965cfd0a9a6ae996c\""
Jan 13 21:16:07.567080 containerd[1465]: time="2025-01-13T21:16:07.566936134Z" level=info msg="StartContainer for \"a25a324013697710df9cb5a3304e205722f0291a0a3a619965cfd0a9a6ae996c\""
Jan 13 21:16:07.638527 systemd[1]: Started cri-containerd-a25a324013697710df9cb5a3304e205722f0291a0a3a619965cfd0a9a6ae996c.scope - libcontainer container a25a324013697710df9cb5a3304e205722f0291a0a3a619965cfd0a9a6ae996c.
Jan 13 21:16:07.684150 containerd[1465]: time="2025-01-13T21:16:07.683447743Z" level=info msg="StartContainer for \"a25a324013697710df9cb5a3304e205722f0291a0a3a619965cfd0a9a6ae996c\" returns successfully"
Jan 13 21:16:07.685792 systemd[1]: cri-containerd-a25a324013697710df9cb5a3304e205722f0291a0a3a619965cfd0a9a6ae996c.scope: Deactivated successfully.
Jan 13 21:16:07.710942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a25a324013697710df9cb5a3304e205722f0291a0a3a619965cfd0a9a6ae996c-rootfs.mount: Deactivated successfully.
Jan 13 21:16:07.721678 containerd[1465]: time="2025-01-13T21:16:07.721602416Z" level=info msg="shim disconnected" id=a25a324013697710df9cb5a3304e205722f0291a0a3a619965cfd0a9a6ae996c namespace=k8s.io
Jan 13 21:16:07.721678 containerd[1465]: time="2025-01-13T21:16:07.721658401Z" level=warning msg="cleaning up after shim disconnected" id=a25a324013697710df9cb5a3304e205722f0291a0a3a619965cfd0a9a6ae996c namespace=k8s.io
Jan 13 21:16:07.721678 containerd[1465]: time="2025-01-13T21:16:07.721668851Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:16:08.514544 containerd[1465]: time="2025-01-13T21:16:08.514451693Z" level=info msg="CreateContainer within sandbox \"054e9e2bfad819d8a8e46aca9d78cf1855f353c421628cb027b8cff240fbbc32\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:16:08.546051 containerd[1465]: time="2025-01-13T21:16:08.544919266Z" level=info msg="CreateContainer within sandbox \"054e9e2bfad819d8a8e46aca9d78cf1855f353c421628cb027b8cff240fbbc32\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9b5dcf0b0f107a926b3b1dfb06699c08ba6d47120d01c61f8efd5cdcea05c40f\""
Jan 13 21:16:08.547883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455369259.mount: Deactivated successfully.
Jan 13 21:16:08.552078 containerd[1465]: time="2025-01-13T21:16:08.550707243Z" level=info msg="StartContainer for \"9b5dcf0b0f107a926b3b1dfb06699c08ba6d47120d01c61f8efd5cdcea05c40f\""
Jan 13 21:16:08.598276 systemd[1]: Started cri-containerd-9b5dcf0b0f107a926b3b1dfb06699c08ba6d47120d01c61f8efd5cdcea05c40f.scope - libcontainer container 9b5dcf0b0f107a926b3b1dfb06699c08ba6d47120d01c61f8efd5cdcea05c40f.
Jan 13 21:16:08.635165 systemd[1]: cri-containerd-9b5dcf0b0f107a926b3b1dfb06699c08ba6d47120d01c61f8efd5cdcea05c40f.scope: Deactivated successfully.
Jan 13 21:16:08.639786 containerd[1465]: time="2025-01-13T21:16:08.639679231Z" level=info msg="StartContainer for \"9b5dcf0b0f107a926b3b1dfb06699c08ba6d47120d01c61f8efd5cdcea05c40f\" returns successfully"
Jan 13 21:16:08.663738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b5dcf0b0f107a926b3b1dfb06699c08ba6d47120d01c61f8efd5cdcea05c40f-rootfs.mount: Deactivated successfully.
Jan 13 21:16:08.667950 containerd[1465]: time="2025-01-13T21:16:08.667744978Z" level=info msg="shim disconnected" id=9b5dcf0b0f107a926b3b1dfb06699c08ba6d47120d01c61f8efd5cdcea05c40f namespace=k8s.io
Jan 13 21:16:08.667950 containerd[1465]: time="2025-01-13T21:16:08.667806454Z" level=warning msg="cleaning up after shim disconnected" id=9b5dcf0b0f107a926b3b1dfb06699c08ba6d47120d01c61f8efd5cdcea05c40f namespace=k8s.io
Jan 13 21:16:08.667950 containerd[1465]: time="2025-01-13T21:16:08.667818166Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:16:09.225142 kubelet[2697]: I0113 21:16:09.222153 2697 setters.go:580] "Node became not ready" node="ci-4152-2-0-a-cb16eea878.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:16:09Z","lastTransitionTime":"2025-01-13T21:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 21:16:09.515367 containerd[1465]: time="2025-01-13T21:16:09.515118975Z" level=info msg="CreateContainer within sandbox \"054e9e2bfad819d8a8e46aca9d78cf1855f353c421628cb027b8cff240fbbc32\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:16:09.551753 containerd[1465]: time="2025-01-13T21:16:09.551686781Z" level=info msg="CreateContainer within sandbox \"054e9e2bfad819d8a8e46aca9d78cf1855f353c421628cb027b8cff240fbbc32\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0788d13762cd5002974eeece4c94606c328b1003ff928731031f8958fb4bc01d\""
Jan 13 21:16:09.554547 containerd[1465]: time="2025-01-13T21:16:09.554482305Z" level=info msg="StartContainer for \"0788d13762cd5002974eeece4c94606c328b1003ff928731031f8958fb4bc01d\""
Jan 13 21:16:09.595271 systemd[1]: Started cri-containerd-0788d13762cd5002974eeece4c94606c328b1003ff928731031f8958fb4bc01d.scope - libcontainer container 0788d13762cd5002974eeece4c94606c328b1003ff928731031f8958fb4bc01d.
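Each of the init containers above leaves the same five-step trace: CreateContainer returns an id, StartContainer succeeds, the cri-containerd-<id>.scope deactivates as the process exits, the rootfs mount is torn down, and the shim logs "shim disconnected". A small illustrative sketch that pairs the start and scope-exit lines from a journal dump like this one to list containers that ran and exited (the regexes target the escaped quoting shown above; purely a convenience, not part of any tool here):

import re, sys

start_re = re.compile(r'StartContainer for \\"([0-9a-f]{64})\\"')
stop_re = re.compile(r'cri-containerd-([0-9a-f]{64})\.scope: Deactivated')

started, exited = set(), set()
for line in sys.stdin:
    if (m := start_re.search(line)):
        started.add(m.group(1))
    if (m := stop_re.search(line)):
        exited.add(m.group(1))
for cid in sorted(started & exited):
    print("ran and exited:", cid[:12])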
Jan 13 21:16:09.647772 containerd[1465]: time="2025-01-13T21:16:09.647644362Z" level=info msg="StartContainer for \"0788d13762cd5002974eeece4c94606c328b1003ff928731031f8958fb4bc01d\" returns successfully"
Jan 13 21:16:10.063044 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:16:10.127067 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jan 13 21:16:10.572462 kubelet[2697]: I0113 21:16:10.572352 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2j8h2" podStartSLOduration=7.572314983 podStartE2EDuration="7.572314983s" podCreationTimestamp="2025-01-13 21:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:16:10.571988751 +0000 UTC m=+154.813525342" watchObservedRunningTime="2025-01-13 21:16:10.572314983 +0000 UTC m=+154.813851524"
Jan 13 21:16:13.403350 systemd-networkd[1378]: lxc_health: Link UP
Jan 13 21:16:13.414283 systemd-networkd[1378]: lxc_health: Gained carrier
Jan 13 21:16:14.629198 systemd-networkd[1378]: lxc_health: Gained IPv6LL
Jan 13 21:16:16.427394 systemd[1]: run-containerd-runc-k8s.io-0788d13762cd5002974eeece4c94606c328b1003ff928731031f8958fb4bc01d-runc.FroTAW.mount: Deactivated successfully.
Jan 13 21:16:18.637588 systemd[1]: run-containerd-runc-k8s.io-0788d13762cd5002974eeece4c94606c328b1003ff928731031f8958fb4bc01d-runc.OSHKya.mount: Deactivated successfully.
Jan 13 21:16:18.914454 sshd[4607]: Connection closed by 172.24.4.1 port 60934
Jan 13 21:16:18.916093 sshd-session[4532]: pam_unix(sshd:session): session closed for user core
Jan 13 21:16:18.923558 systemd[1]: sshd@24-172.24.4.27:22-172.24.4.1:60934.service: Deactivated successfully.
Jan 13 21:16:18.928234 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 21:16:18.930920 systemd-logind[1440]: Session 27 logged out. Waiting for processes to exit.
Jan 13 21:16:18.934154 systemd-logind[1440]: Removed session 27.
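The podStartSLOduration=7.572314983 figure in the startup-latency line is simply observedRunningTime minus podCreationTimestamp (21:16:10.572314983 − 21:16:03). Checking the arithmetic at microsecond precision (Python's datetime cannot carry the trailing nanoseconds):

from datetime import datetime, timezone

created = datetime(2025, 1, 13, 21, 16, 3, tzinfo=timezone.utc)
running = datetime(2025, 1, 13, 21, 16, 10, 572314, tzinfo=timezone.utc)
print((running - created).total_seconds())  # 7.572314, matching the logged value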