Jan 13 20:31:13.011406 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025 Jan 13 20:31:13.011468 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 13 20:31:13.011494 kernel: BIOS-provided physical RAM map: Jan 13 20:31:13.011514 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 13 20:31:13.011532 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 13 20:31:13.011556 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 13 20:31:13.011669 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 13 20:31:13.011690 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 13 20:31:13.011727 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 13 20:31:13.011748 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 13 20:31:13.011767 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 13 20:31:13.011787 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 13 20:31:13.011807 kernel: NX (Execute Disable) protection: active Jan 13 20:31:13.011827 kernel: APIC: Static calls initialized Jan 13 20:31:13.011857 kernel: SMBIOS 3.0.0 present. Jan 13 20:31:13.011877 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 13 20:31:13.011898 kernel: Hypervisor detected: KVM Jan 13 20:31:13.011918 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 20:31:13.011938 kernel: kvm-clock: using sched offset of 3483399441 cycles Jan 13 20:31:13.011964 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 20:31:13.011985 kernel: tsc: Detected 1996.249 MHz processor Jan 13 20:31:13.012007 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 20:31:13.012029 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 20:31:13.012051 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 13 20:31:13.012072 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 13 20:31:13.012094 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 20:31:13.012115 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 13 20:31:13.012136 kernel: ACPI: Early table checksum verification disabled Jan 13 20:31:13.012161 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 13 20:31:13.012183 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:31:13.012204 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:31:13.012225 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:31:13.012246 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 13 20:31:13.012267 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:31:13.012288 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:31:13.012309 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 13 20:31:13.012330 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 13 20:31:13.012355 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 13 20:31:13.012376 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 13 20:31:13.012397 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 13 20:31:13.012425 kernel: No NUMA configuration found Jan 13 20:31:13.012447 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 13 20:31:13.012501 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 13 20:31:13.012525 kernel: Zone ranges: Jan 13 20:31:13.012552 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 20:31:13.012682 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 13 20:31:13.012707 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 13 20:31:13.012728 kernel: Movable zone start for each node Jan 13 20:31:13.012750 kernel: Early memory node ranges Jan 13 20:31:13.012774 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 13 20:31:13.012831 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 13 20:31:13.012859 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 13 20:31:13.012890 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 13 20:31:13.012912 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 20:31:13.012934 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 13 20:31:13.012956 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 13 20:31:13.012978 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 13 20:31:13.013000 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 20:31:13.013022 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 20:31:13.013044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 13 20:31:13.013065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 20:31:13.013092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 20:31:13.013114 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 20:31:13.013136 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 20:31:13.013157 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 20:31:13.013179 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 13 20:31:13.013201 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 13 20:31:13.013223 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 13 20:31:13.013244 kernel: Booting paravirtualized kernel on KVM Jan 13 20:31:13.013267 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 20:31:13.013293 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 13 20:31:13.013315 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 13 20:31:13.013337 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 13 20:31:13.013359 kernel: pcpu-alloc: [0] 0 1 Jan 13 20:31:13.013380 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 13 20:31:13.013406 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 13 20:31:13.013429 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:31:13.013451 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:31:13.013478 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:31:13.013500 kernel: Fallback order for Node 0: 0 Jan 13 20:31:13.013522 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 13 20:31:13.013543 kernel: Policy zone: Normal Jan 13 20:31:13.013596 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:31:13.013618 kernel: software IO TLB: area num 2. Jan 13 20:31:13.013641 kernel: Memory: 3964156K/4193772K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 229356K reserved, 0K cma-reserved) Jan 13 20:31:13.013663 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 20:31:13.013690 kernel: ftrace: allocating 37890 entries in 149 pages Jan 13 20:31:13.013712 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 20:31:13.013734 kernel: Dynamic Preempt: voluntary Jan 13 20:31:13.013755 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:31:13.013779 kernel: rcu: RCU event tracing is enabled. Jan 13 20:31:13.013802 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 20:31:13.013824 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:31:13.013869 kernel: Rude variant of Tasks RCU enabled. Jan 13 20:31:13.013892 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:31:13.013914 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 20:31:13.013941 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 20:31:13.013962 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 13 20:31:13.013984 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:31:13.014006 kernel: Console: colour VGA+ 80x25 Jan 13 20:31:13.014028 kernel: printk: console [tty0] enabled Jan 13 20:31:13.014050 kernel: printk: console [ttyS0] enabled Jan 13 20:31:13.014072 kernel: ACPI: Core revision 20230628 Jan 13 20:31:13.014094 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 20:31:13.014115 kernel: x2apic enabled Jan 13 20:31:13.016647 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 20:31:13.016679 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 20:31:13.016702 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 13 20:31:13.016725 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 13 20:31:13.016748 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 13 20:31:13.016770 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 13 20:31:13.016793 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 20:31:13.016815 kernel: Spectre V2 : Mitigation: Retpolines Jan 13 20:31:13.016837 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 20:31:13.016866 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 20:31:13.016887 kernel: Speculative Store Bypass: Vulnerable Jan 13 20:31:13.016910 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 13 20:31:13.016932 kernel: Freeing SMP alternatives memory: 32K Jan 13 20:31:13.016969 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:31:13.016995 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:31:13.017018 kernel: landlock: Up and running. Jan 13 20:31:13.017041 kernel: SELinux: Initializing. Jan 13 20:31:13.017065 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:31:13.017088 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:31:13.017111 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 13 20:31:13.017135 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:31:13.017163 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:31:13.017187 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:31:13.017210 kernel: Performance Events: AMD PMU driver. Jan 13 20:31:13.017233 kernel: ... version: 0 Jan 13 20:31:13.017260 kernel: ... bit width: 48 Jan 13 20:31:13.017283 kernel: ... generic registers: 4 Jan 13 20:31:13.017306 kernel: ... value mask: 0000ffffffffffff Jan 13 20:31:13.017330 kernel: ... max period: 00007fffffffffff Jan 13 20:31:13.017353 kernel: ... fixed-purpose events: 0 Jan 13 20:31:13.017376 kernel: ... event mask: 000000000000000f Jan 13 20:31:13.017398 kernel: signal: max sigframe size: 1440 Jan 13 20:31:13.017421 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:31:13.017446 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:31:13.017469 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:31:13.017496 kernel: smpboot: x86: Booting SMP configuration: Jan 13 20:31:13.017520 kernel: .... 
node #0, CPUs: #1 Jan 13 20:31:13.017543 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 20:31:13.017596 kernel: smpboot: Max logical packages: 2 Jan 13 20:31:13.017621 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 13 20:31:13.017644 kernel: devtmpfs: initialized Jan 13 20:31:13.017667 kernel: x86/mm: Memory block size: 128MB Jan 13 20:31:13.017690 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:31:13.017714 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 20:31:13.017742 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:31:13.017765 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:31:13.017789 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:31:13.017812 kernel: audit: type=2000 audit(1736800271.603:1): state=initialized audit_enabled=0 res=1 Jan 13 20:31:13.017835 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:31:13.017936 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 20:31:13.017959 kernel: cpuidle: using governor menu Jan 13 20:31:13.017982 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:31:13.018005 kernel: dca service started, version 1.12.1 Jan 13 20:31:13.018034 kernel: PCI: Using configuration type 1 for base access Jan 13 20:31:13.018057 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 13 20:31:13.018080 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:31:13.018104 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:31:13.018126 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:31:13.018149 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:31:13.018173 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:31:13.018196 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:31:13.018218 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:31:13.018245 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 20:31:13.018268 kernel: ACPI: Interpreter enabled Jan 13 20:31:13.018291 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 20:31:13.018314 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 20:31:13.018337 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 20:31:13.018361 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 20:31:13.018384 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 13 20:31:13.018407 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 20:31:13.019985 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:31:13.020188 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 13 20:31:13.020357 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 13 20:31:13.020384 kernel: acpiphp: Slot [3] registered Jan 13 20:31:13.020402 kernel: acpiphp: Slot [4] registered Jan 13 20:31:13.020420 kernel: acpiphp: Slot [5] registered Jan 13 20:31:13.020437 kernel: acpiphp: Slot [6] registered Jan 13 20:31:13.020454 kernel: acpiphp: Slot [7] registered Jan 13 20:31:13.020479 kernel: acpiphp: Slot [8] registered Jan 13 20:31:13.020496 kernel: acpiphp: Slot [9] registered Jan 13 20:31:13.020513 kernel: acpiphp: Slot [10] registered Jan 13 20:31:13.020531 
kernel: acpiphp: Slot [11] registered Jan 13 20:31:13.020549 kernel: acpiphp: Slot [12] registered Jan 13 20:31:13.021645 kernel: acpiphp: Slot [13] registered Jan 13 20:31:13.021665 kernel: acpiphp: Slot [14] registered Jan 13 20:31:13.021683 kernel: acpiphp: Slot [15] registered Jan 13 20:31:13.021700 kernel: acpiphp: Slot [16] registered Jan 13 20:31:13.021717 kernel: acpiphp: Slot [17] registered Jan 13 20:31:13.021740 kernel: acpiphp: Slot [18] registered Jan 13 20:31:13.021757 kernel: acpiphp: Slot [19] registered Jan 13 20:31:13.021775 kernel: acpiphp: Slot [20] registered Jan 13 20:31:13.021792 kernel: acpiphp: Slot [21] registered Jan 13 20:31:13.021809 kernel: acpiphp: Slot [22] registered Jan 13 20:31:13.021827 kernel: acpiphp: Slot [23] registered Jan 13 20:31:13.021865 kernel: acpiphp: Slot [24] registered Jan 13 20:31:13.021883 kernel: acpiphp: Slot [25] registered Jan 13 20:31:13.021900 kernel: acpiphp: Slot [26] registered Jan 13 20:31:13.021921 kernel: acpiphp: Slot [27] registered Jan 13 20:31:13.021938 kernel: acpiphp: Slot [28] registered Jan 13 20:31:13.021955 kernel: acpiphp: Slot [29] registered Jan 13 20:31:13.021972 kernel: acpiphp: Slot [30] registered Jan 13 20:31:13.021989 kernel: acpiphp: Slot [31] registered Jan 13 20:31:13.022007 kernel: PCI host bridge to bus 0000:00 Jan 13 20:31:13.022194 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 20:31:13.022354 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 20:31:13.022515 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 20:31:13.024351 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 20:31:13.024436 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 13 20:31:13.024517 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 20:31:13.026671 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 13 20:31:13.026792 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 13 20:31:13.026902 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 13 20:31:13.027009 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 13 20:31:13.027108 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 13 20:31:13.027206 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 13 20:31:13.027304 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 13 20:31:13.027401 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 13 20:31:13.027498 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 13 20:31:13.028637 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 13 20:31:13.028735 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 13 20:31:13.028834 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 13 20:31:13.028926 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 13 20:31:13.029016 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 13 20:31:13.029107 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 13 20:31:13.029196 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 13 20:31:13.029292 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 20:31:13.029392 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 13 20:31:13.029483 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 13 20:31:13.029590 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 13 20:31:13.029685 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 13 20:31:13.029774 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 13 20:31:13.029884 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 13 20:31:13.029982 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 13 20:31:13.030072 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 13 20:31:13.030168 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 13 20:31:13.030272 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 13 20:31:13.032785 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 13 20:31:13.032891 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 13 20:31:13.032998 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 20:31:13.033103 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 13 20:31:13.033201 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 13 20:31:13.033300 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 13 20:31:13.033317 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 20:31:13.033328 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 20:31:13.033338 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 20:31:13.033348 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 20:31:13.033358 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 13 20:31:13.033373 kernel: iommu: Default domain type: Translated Jan 13 20:31:13.033385 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 20:31:13.033394 kernel: PCI: Using ACPI for IRQ routing Jan 13 20:31:13.033403 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 20:31:13.033413 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 13 20:31:13.033422 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 13 20:31:13.033512 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 13 20:31:13.033621 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 13 20:31:13.033718 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 20:31:13.033732 kernel: vgaarb: loaded Jan 13 20:31:13.033741 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 20:31:13.033751 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:31:13.033760 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:31:13.033770 kernel: pnp: PnP ACPI init Jan 13 20:31:13.033885 kernel: pnp 00:03: [dma 2] Jan 13 20:31:13.033901 kernel: pnp: PnP ACPI: found 5 devices Jan 13 20:31:13.033911 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 20:31:13.033925 kernel: NET: Registered PF_INET protocol family Jan 13 20:31:13.033934 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:31:13.033944 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 20:31:13.033953 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:31:13.033963 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:31:13.033972 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 13 20:31:13.033982 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 20:31:13.033992 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:31:13.034001 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:31:13.034013 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:31:13.034022 kernel: NET: Registered PF_XDP protocol family Jan 13 20:31:13.034105 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 20:31:13.034193 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 20:31:13.034279 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 20:31:13.036589 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 13 20:31:13.036681 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 13 20:31:13.036776 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 13 20:31:13.036885 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 13 20:31:13.036900 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:31:13.036910 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 20:31:13.036920 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 13 20:31:13.036929 kernel: Initialise system trusted keyrings Jan 13 20:31:13.036939 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 20:31:13.036948 kernel: Key type asymmetric registered Jan 13 20:31:13.036958 kernel: Asymmetric key parser 'x509' registered Jan 13 20:31:13.036971 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 20:31:13.036980 kernel: io scheduler mq-deadline registered Jan 13 20:31:13.036989 kernel: io scheduler kyber registered Jan 13 20:31:13.036999 kernel: io scheduler bfq registered Jan 13 20:31:13.037008 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 20:31:13.037018 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 13 20:31:13.037028 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 13 20:31:13.037038 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 13 20:31:13.037047 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 13 20:31:13.037057 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:31:13.037068 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 20:31:13.037078 kernel: random: crng init done Jan 13 20:31:13.037087 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 20:31:13.037096 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 20:31:13.037106 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 20:31:13.037196 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 13 20:31:13.037212 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 20:31:13.037291 kernel: rtc_cmos 00:04: registered as rtc0 Jan 13 20:31:13.037401 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:31:12 UTC (1736800272) Jan 13 20:31:13.037491 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 13 20:31:13.037507 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 13 20:31:13.037518 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:31:13.037529 kernel: Segment Routing with IPv6 Jan 13 20:31:13.037539 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:31:13.037550 kernel: NET: Registered PF_PACKET 
protocol family Jan 13 20:31:13.038593 kernel: Key type dns_resolver registered Jan 13 20:31:13.038611 kernel: IPI shorthand broadcast: enabled Jan 13 20:31:13.038622 kernel: sched_clock: Marking stable (970015783, 171283305)->(1172510453, -31211365) Jan 13 20:31:13.038632 kernel: registered taskstats version 1 Jan 13 20:31:13.038642 kernel: Loading compiled-in X.509 certificates Jan 13 20:31:13.038653 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e' Jan 13 20:31:13.038663 kernel: Key type .fscrypt registered Jan 13 20:31:13.038673 kernel: Key type fscrypt-provisioning registered Jan 13 20:31:13.038683 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 20:31:13.038694 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:31:13.038706 kernel: ima: No architecture policies found Jan 13 20:31:13.038716 kernel: clk: Disabling unused clocks Jan 13 20:31:13.038726 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 13 20:31:13.038736 kernel: Write protecting the kernel read-only data: 38912k Jan 13 20:31:13.038747 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 13 20:31:13.038756 kernel: Run /init as init process Jan 13 20:31:13.038766 kernel: with arguments: Jan 13 20:31:13.038776 kernel: /init Jan 13 20:31:13.038786 kernel: with environment: Jan 13 20:31:13.038799 kernel: HOME=/ Jan 13 20:31:13.038808 kernel: TERM=linux Jan 13 20:31:13.038818 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:31:13.038831 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:31:13.038844 systemd[1]: Detected virtualization kvm. Jan 13 20:31:13.038856 systemd[1]: Detected architecture x86-64. Jan 13 20:31:13.038867 systemd[1]: Running in initrd. Jan 13 20:31:13.038881 systemd[1]: No hostname configured, using default hostname. Jan 13 20:31:13.038891 systemd[1]: Hostname set to <localhost>. Jan 13 20:31:13.038903 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:31:13.038913 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:31:13.038924 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:31:13.038935 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:31:13.038947 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:31:13.038967 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:31:13.038981 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:31:13.038992 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:31:13.039005 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:31:13.039017 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:31:13.039028 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 13 20:31:13.039041 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:31:13.039052 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:31:13.039063 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:31:13.039074 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:31:13.039085 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:31:13.039097 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:31:13.039108 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:31:13.039161 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:31:13.039177 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:31:13.039188 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:31:13.039201 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:31:13.039212 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:31:13.039223 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:31:13.039234 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:31:13.039245 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:31:13.039256 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:31:13.039267 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:31:13.039280 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:31:13.039292 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:31:13.039303 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:31:13.039314 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:31:13.039347 systemd-journald[184]: Collecting audit messages is disabled. Jan 13 20:31:13.039377 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:31:13.039389 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:31:13.039405 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:31:13.039417 kernel: Bridge firewalling registered Jan 13 20:31:13.039428 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:31:13.039439 systemd-journald[184]: Journal started Jan 13 20:31:13.039463 systemd-journald[184]: Runtime Journal (/run/log/journal/b1b7cb03d5794b42bcaedff19f05f309) is 8.0M, max 78.3M, 70.3M free. Jan 13 20:31:12.998443 systemd-modules-load[186]: Inserted module 'overlay' Jan 13 20:31:13.036475 systemd-modules-load[186]: Inserted module 'br_netfilter' Jan 13 20:31:13.084582 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:31:13.085716 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:31:13.086985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:31:13.088529 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:31:13.103792 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:31:13.107284 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 13 20:31:13.115834 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:31:13.126830 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:31:13.132629 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:31:13.138647 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:31:13.141163 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:31:13.151873 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:31:13.153224 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:31:13.158696 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:31:13.176316 dracut-cmdline[218]: dracut-dracut-053 Jan 13 20:31:13.180578 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 13 20:31:13.195111 systemd-resolved[220]: Positive Trust Anchors: Jan 13 20:31:13.195946 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:31:13.195991 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:31:13.199453 systemd-resolved[220]: Defaulting to hostname 'linux'. Jan 13 20:31:13.200340 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:31:13.200972 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:31:13.256633 kernel: SCSI subsystem initialized Jan 13 20:31:13.267618 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:31:13.279903 kernel: iscsi: registered transport (tcp) Jan 13 20:31:13.301673 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:31:13.301739 kernel: QLogic iSCSI HBA Driver Jan 13 20:31:13.357453 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:31:13.366841 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:31:13.417221 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 20:31:13.417322 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:31:13.419667 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:31:13.465615 kernel: raid6: sse2x4 gen() 13141 MB/s Jan 13 20:31:13.483612 kernel: raid6: sse2x2 gen() 15169 MB/s Jan 13 20:31:13.501928 kernel: raid6: sse2x1 gen() 10095 MB/s Jan 13 20:31:13.501991 kernel: raid6: using algorithm sse2x2 gen() 15169 MB/s Jan 13 20:31:13.521002 kernel: raid6: .... xor() 9352 MB/s, rmw enabled Jan 13 20:31:13.521066 kernel: raid6: using ssse3x2 recovery algorithm Jan 13 20:31:13.543967 kernel: xor: measuring software checksum speed Jan 13 20:31:13.544039 kernel: prefetch64-sse : 17080 MB/sec Jan 13 20:31:13.545246 kernel: generic_sse : 16838 MB/sec Jan 13 20:31:13.545294 kernel: xor: using function: prefetch64-sse (17080 MB/sec) Jan 13 20:31:13.721620 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:31:13.737798 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:31:13.743682 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:31:13.758495 systemd-udevd[404]: Using default interface naming scheme 'v255'. Jan 13 20:31:13.762745 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:31:13.772820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:31:13.792454 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Jan 13 20:31:13.837895 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:31:13.845761 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:31:13.943808 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:31:13.956075 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:31:13.990263 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:31:13.995958 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:31:14.000163 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:31:14.003906 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:31:14.013880 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:31:14.037143 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:31:14.048588 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 13 20:31:14.074511 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 13 20:31:14.074657 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:31:14.074672 kernel: GPT:17805311 != 20971519 Jan 13 20:31:14.074685 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:31:14.074697 kernel: GPT:17805311 != 20971519 Jan 13 20:31:14.074709 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:31:14.074728 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:31:14.069627 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:31:14.069686 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:31:14.070321 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 13 20:31:14.070835 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:31:14.070878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:31:14.071364 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:31:14.082214 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:31:14.086589 kernel: libata version 3.00 loaded. Jan 13 20:31:14.105328 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 13 20:31:14.118359 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (451) Jan 13 20:31:14.118376 kernel: scsi host0: ata_piix Jan 13 20:31:14.118501 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (472) Jan 13 20:31:14.118515 kernel: scsi host1: ata_piix Jan 13 20:31:14.118660 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 13 20:31:14.118674 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 13 20:31:14.135249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 20:31:14.156131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:31:14.162717 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 20:31:14.168442 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:31:14.173063 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 20:31:14.173623 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 20:31:14.180692 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:31:14.183204 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:31:14.193880 disk-uuid[507]: Primary Header is updated. Jan 13 20:31:14.193880 disk-uuid[507]: Secondary Entries is updated. Jan 13 20:31:14.193880 disk-uuid[507]: Secondary Header is updated. Jan 13 20:31:14.203615 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:31:14.203870 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:31:15.219771 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:31:15.222196 disk-uuid[511]: The operation has completed successfully. Jan 13 20:31:15.299350 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:31:15.299519 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:31:15.325678 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:31:15.348807 sh[529]: Success Jan 13 20:31:15.378586 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 13 20:31:15.461518 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:31:15.484798 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:31:15.487901 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 20:31:15.533807 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a Jan 13 20:31:15.533913 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:31:15.538585 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:31:15.543419 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:31:15.547121 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:31:15.566435 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:31:15.567387 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:31:15.577688 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:31:15.579889 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:31:15.589589 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:31:15.589634 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:31:15.592895 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:31:15.597583 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:31:15.606058 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:31:15.608818 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:31:15.620937 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:31:15.625753 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:31:15.655936 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:31:15.663735 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:31:15.684391 systemd-networkd[712]: lo: Link UP Jan 13 20:31:15.684403 systemd-networkd[712]: lo: Gained carrier Jan 13 20:31:15.685516 systemd-networkd[712]: Enumeration completed Jan 13 20:31:15.685619 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:31:15.686601 systemd[1]: Reached target network.target - Network. Jan 13 20:31:15.687635 systemd-networkd[712]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:31:15.687639 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:31:15.689269 systemd-networkd[712]: eth0: Link UP Jan 13 20:31:15.689272 systemd-networkd[712]: eth0: Gained carrier Jan 13 20:31:15.689279 systemd-networkd[712]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:31:15.700623 systemd-networkd[712]: eth0: DHCPv4 address 172.24.4.206/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 13 20:31:15.780238 ignition[658]: Ignition 2.20.0 Jan 13 20:31:15.780250 ignition[658]: Stage: fetch-offline Jan 13 20:31:15.781585 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 13 20:31:15.780283 ignition[658]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:31:15.784151 systemd-resolved[220]: Detected conflict on linux IN A 172.24.4.206 Jan 13 20:31:15.780292 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:31:15.784165 systemd-resolved[220]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Jan 13 20:31:15.780378 ignition[658]: parsed url from cmdline: "" Jan 13 20:31:15.788702 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 20:31:15.780381 ignition[658]: no config URL provided Jan 13 20:31:15.780386 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:31:15.780394 ignition[658]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:31:15.780398 ignition[658]: failed to fetch config: resource requires networking Jan 13 20:31:15.780709 ignition[658]: Ignition finished successfully Jan 13 20:31:15.801602 ignition[722]: Ignition 2.20.0 Jan 13 20:31:15.801613 ignition[722]: Stage: fetch Jan 13 20:31:15.801798 ignition[722]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:31:15.801815 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:31:15.801916 ignition[722]: parsed url from cmdline: "" Jan 13 20:31:15.801920 ignition[722]: no config URL provided Jan 13 20:31:15.801925 ignition[722]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:31:15.801934 ignition[722]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:31:15.802078 ignition[722]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 13 20:31:15.802106 ignition[722]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 13 20:31:15.802119 ignition[722]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 13 20:31:16.181251 ignition[722]: GET result: OK Jan 13 20:31:16.181472 ignition[722]: parsing config with SHA512: e4ef7ff9d93b11cd5916709e42260c8ac02abb6a960f99d23eb7f07cf778fbdec549222e940f7721439418f618985d8ddd00eefd240bdebb335f4a738fbeecd9 Jan 13 20:31:16.192598 unknown[722]: fetched base config from "system" Jan 13 20:31:16.192637 unknown[722]: fetched base config from "system" Jan 13 20:31:16.193880 ignition[722]: fetch: fetch complete Jan 13 20:31:16.192651 unknown[722]: fetched user config from "openstack" Jan 13 20:31:16.193894 ignition[722]: fetch: fetch passed Jan 13 20:31:16.193992 ignition[722]: Ignition finished successfully Jan 13 20:31:16.198461 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 20:31:16.205818 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:31:16.246495 ignition[729]: Ignition 2.20.0 Jan 13 20:31:16.246522 ignition[729]: Stage: kargs Jan 13 20:31:16.246976 ignition[729]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:31:16.247003 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:31:16.252090 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:31:16.249529 ignition[729]: kargs: kargs passed Jan 13 20:31:16.249661 ignition[729]: Ignition finished successfully Jan 13 20:31:16.262863 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 13 20:31:16.295837 ignition[736]: Ignition 2.20.0 Jan 13 20:31:16.295864 ignition[736]: Stage: disks Jan 13 20:31:16.296246 ignition[736]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:31:16.296272 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:31:16.300351 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:31:16.298632 ignition[736]: disks: disks passed Jan 13 20:31:16.302993 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:31:16.298727 ignition[736]: Ignition finished successfully Jan 13 20:31:16.305360 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:31:16.307950 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:31:16.310224 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:31:16.313047 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:31:16.321913 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:31:16.356124 systemd-fsck[745]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 20:31:16.371475 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:31:16.381793 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:31:16.528602 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none. Jan 13 20:31:16.528845 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:31:16.530330 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:31:16.542658 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:31:16.544641 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:31:16.545369 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:31:16.553717 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 13 20:31:16.555965 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:31:16.555997 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:31:16.568390 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (753) Jan 13 20:31:16.568413 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:31:16.559589 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:31:16.578219 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:31:16.578239 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:31:16.584577 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:31:16.585721 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:31:16.589319 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:31:16.718117 initrd-setup-root[782]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:31:16.726147 initrd-setup-root[789]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:31:16.732372 initrd-setup-root[796]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:31:16.739789 initrd-setup-root[803]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:31:16.842896 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:31:16.850661 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:31:16.854323 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:31:16.860762 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:31:16.863585 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:31:16.885284 ignition[870]: INFO : Ignition 2.20.0 Jan 13 20:31:16.885284 ignition[870]: INFO : Stage: mount Jan 13 20:31:16.886516 ignition[870]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:31:16.886516 ignition[870]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 20:31:16.886516 ignition[870]: INFO : mount: mount passed Jan 13 20:31:16.890119 ignition[870]: INFO : Ignition finished successfully Jan 13 20:31:16.887629 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:31:16.892131 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:31:17.060793 systemd-networkd[712]: eth0: Gained IPv6LL Jan 13 20:31:23.806088 coreos-metadata[755]: Jan 13 20:31:23.805 WARN failed to locate config-drive, using the metadata service API instead Jan 13 20:31:23.847237 coreos-metadata[755]: Jan 13 20:31:23.847 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 20:31:23.863896 coreos-metadata[755]: Jan 13 20:31:23.863 INFO Fetch successful Jan 13 20:31:23.863896 coreos-metadata[755]: Jan 13 20:31:23.863 INFO wrote hostname ci-4186-1-0-8-e51fb1a5ac.novalocal to /sysroot/etc/hostname Jan 13 20:31:23.867201 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 13 20:31:23.867412 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 13 20:31:23.879748 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:31:23.914020 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:31:23.939657 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (888) Jan 13 20:31:23.948786 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:31:23.948857 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:31:23.952927 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:31:23.964684 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:31:23.969219 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:31:24.010675 ignition[906]: INFO : Ignition 2.20.0
Jan 13 20:31:24.010675 ignition[906]: INFO : Stage: files
Jan 13 20:31:24.013529 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:24.013529 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:31:24.013529 ignition[906]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:31:24.018891 ignition[906]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:31:24.018891 ignition[906]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:31:24.022635 ignition[906]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:31:24.022635 ignition[906]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:31:24.022635 ignition[906]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:31:24.022361 unknown[906]: wrote ssh authorized keys file for user: core
Jan 13 20:31:24.029789 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:31:24.029789 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 20:31:24.106610 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:31:24.486862 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:31:24.486862 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:31:24.491266 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 13 20:31:25.144832 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:31:25.805366 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:31:25.805366 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:31:25.809369 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:31:25.809369 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:31:25.809369 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:31:25.809369 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:31:25.809369 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:31:25.809369 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:31:25.809369 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:31:25.809369 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:31:25.809369 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:31:25.809369 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:31:25.809369 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:31:25.809369 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:31:25.809369 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 13 20:31:26.368984 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 20:31:29.029716 ignition[906]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:31:29.031101 ignition[906]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 20:31:29.031838 ignition[906]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:31:29.033584 ignition[906]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:31:29.033584 ignition[906]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 20:31:29.033584 ignition[906]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:31:29.033584 ignition[906]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:31:29.033584 ignition[906]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:31:29.033584 ignition[906]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:31:29.033584 ignition[906]: INFO : files: files passed
Jan 13 20:31:29.033584 ignition[906]: INFO : Ignition finished successfully
Jan 13 20:31:29.033949 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:31:29.045711 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:31:29.051138 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:31:29.052656 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:31:29.053244 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:31:29.074904 initrd-setup-root-after-ignition[940]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:31:29.076951 initrd-setup-root-after-ignition[936]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:31:29.076951 initrd-setup-root-after-ignition[936]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:31:29.078484 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:31:29.080804 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:31:29.091817 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:31:29.126233 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:31:29.126423 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:31:29.128694 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:31:29.130459 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:31:29.131748 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:31:29.136795 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:31:29.156139 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:31:29.164838 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:31:29.176547 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:31:29.176672 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:31:29.179869 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:31:29.180421 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:31:29.182482 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:31:29.184424 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:31:29.184472 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:31:29.186597 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:31:29.187524 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:31:29.189441 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:31:29.191117 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:31:29.192682 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:31:29.194592 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:31:29.196484 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:31:29.198518 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:31:29.200378 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:31:29.202353 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:31:29.204416 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:31:29.204462 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:31:29.206855 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:31:29.207860 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:31:29.209573 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:31:29.209605 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:31:29.211916 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:31:29.211959 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:31:29.215236 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:31:29.215278 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:31:29.216262 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:31:29.216300 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:31:29.223646 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:31:29.227185 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:31:29.227244 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:31:29.229203 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:31:29.234789 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:31:29.234840 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:31:29.235387 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:31:29.235427 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:31:29.238847 ignition[961]: INFO : Ignition 2.20.0
Jan 13 20:31:29.238847 ignition[961]: INFO : Stage: umount
Jan 13 20:31:29.238847 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:29.238847 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:31:29.242684 ignition[961]: INFO : umount: umount passed
Jan 13 20:31:29.242684 ignition[961]: INFO : Ignition finished successfully
Jan 13 20:31:29.243789 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:31:29.243880 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:31:29.244719 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:31:29.244786 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:31:29.245355 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:31:29.245395 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:31:29.246468 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:31:29.246505 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:31:29.247011 systemd[1]: Stopped target network.target - Network.
Jan 13 20:31:29.249284 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:31:29.249330 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:31:29.250969 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:31:29.251391 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:31:29.256596 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:31:29.257105 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:31:29.257536 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:31:29.258027 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:31:29.258060 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:31:29.260713 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:31:29.260756 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:31:29.261970 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:31:29.262019 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:31:29.262499 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:31:29.262541 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:31:29.263160 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:31:29.263965 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:31:29.265916 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:31:29.266509 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:31:29.266595 systemd-networkd[712]: eth0: DHCPv6 lease lost
Jan 13 20:31:29.267325 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:31:29.268530 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:31:29.268645 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:31:29.270261 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:31:29.270309 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:31:29.271327 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:31:29.271370 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:31:29.279689 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:31:29.282464 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:31:29.282528 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:31:29.283493 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:31:29.284735 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:31:29.284825 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:31:29.295772 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:31:29.295847 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:31:29.296504 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:31:29.296545 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:31:29.297547 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:31:29.297614 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:31:29.298914 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:31:29.299042 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:31:29.300118 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:31:29.300206 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:31:29.301886 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:31:29.301936 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:31:29.302968 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:31:29.302998 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:31:29.303915 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:31:29.303955 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:31:29.305451 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:31:29.305489 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:31:29.306479 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:31:29.306520 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:31:29.314737 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:31:29.315518 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:31:29.315588 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:31:29.316120 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:31:29.316160 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:31:29.321011 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:31:29.321098 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:31:29.322814 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:31:29.333716 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:31:29.340698 systemd[1]: Switching root.
Jan 13 20:31:29.378669 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:31:29.378776 systemd-journald[184]: Journal stopped
Jan 13 20:31:31.186172 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:31:31.186231 kernel: SELinux: policy capability open_perms=1
Jan 13 20:31:31.186244 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:31:31.186257 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:31:31.186272 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:31:31.186284 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:31:31.186296 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:31:31.186308 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:31:31.186319 kernel: audit: type=1403 audit(1736800290.166:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:31:31.186336 systemd[1]: Successfully loaded SELinux policy in 78.127ms.
Jan 13 20:31:31.186361 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.016ms.
Jan 13 20:31:31.186375 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:31:31.186388 systemd[1]: Detected virtualization kvm.
Jan 13 20:31:31.186403 systemd[1]: Detected architecture x86-64.
Jan 13 20:31:31.186416 systemd[1]: Detected first boot.
Jan 13 20:31:31.186429 systemd[1]: Hostname set to .
Jan 13 20:31:31.186442 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:31:31.186454 zram_generator::config[1003]: No configuration found.
Jan 13 20:31:31.186469 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:31:31.186482 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:31:31.186498 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:31:31.186514 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:31:31.186527 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:31:31.186540 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:31:31.186552 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:31:31.186606 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:31:31.186620 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:31:31.186633 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:31:31.186646 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:31:31.186662 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:31:31.186675 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:31:31.186688 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:31:31.186701 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:31:31.186713 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:31:31.186726 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:31:31.186739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:31:31.186751 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:31:31.186764 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:31:31.186778 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:31:31.186791 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:31:31.186804 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:31:31.186817 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:31:31.186830 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:31:31.186846 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:31:31.186861 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:31:31.186874 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:31:31.186886 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:31:31.186899 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:31:31.186912 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:31:31.186925 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:31:31.186937 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:31:31.186949 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:31:31.186962 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:31:31.186975 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:31:31.186991 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:31:31.187004 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:31:31.187016 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:31:31.187029 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:31:31.187041 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:31:31.187055 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:31:31.187067 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:31:31.187080 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:31:31.187095 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:31:31.187107 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:31:31.187120 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:31:31.187132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:31:31.187145 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:31:31.187158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:31:31.187171 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:31:31.187183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:31:31.187198 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:31:31.187211 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:31:31.187223 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:31:31.187236 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:31:31.187248 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:31:31.187261 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:31:31.187273 kernel: loop: module loaded
Jan 13 20:31:31.187285 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:31:31.187298 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:31:31.187312 kernel: fuse: init (API version 7.39)
Jan 13 20:31:31.187324 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:31:31.187337 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:31:31.187349 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:31:31.187362 systemd[1]: Stopped verity-setup.service.
Jan 13 20:31:31.187375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:31:31.187387 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:31:31.187400 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:31:31.187412 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:31:31.187427 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:31:31.187440 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:31:31.187452 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:31:31.187465 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:31:31.187477 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:31:31.187492 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:31:31.187505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:31:31.187517 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:31:31.187531 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:31:31.187543 kernel: ACPI: bus type drm_connector registered
Jan 13 20:31:31.187572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:31:31.187585 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:31:31.187598 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:31:31.187611 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:31:31.187625 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:31:31.187637 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:31:31.187665 systemd-journald[1096]: Collecting audit messages is disabled.
Jan 13 20:31:31.187703 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:31:31.187717 systemd-journald[1096]: Journal started
Jan 13 20:31:31.187743 systemd-journald[1096]: Runtime Journal (/run/log/journal/b1b7cb03d5794b42bcaedff19f05f309) is 8.0M, max 78.3M, 70.3M free.
Jan 13 20:31:30.814348 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:31:30.837291 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 20:31:30.837646 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:31:31.192579 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:31:31.192610 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:31:31.194412 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:31:31.195178 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:31:31.195902 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:31:31.205161 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:31:31.212664 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:31:31.216215 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:31:31.216791 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:31:31.216830 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:31:31.218408 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:31:31.229690 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:31:31.231686 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:31:31.232476 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:31:31.242979 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:31:31.244903 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:31:31.245700 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:31:31.248708 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:31:31.249265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:31:31.250237 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:31:31.259763 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:31:31.262485 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:31:31.265714 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:31:31.266882 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:31:31.268792 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:31:31.269583 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:31:31.276210 systemd-journald[1096]: Time spent on flushing to /var/log/journal/b1b7cb03d5794b42bcaedff19f05f309 is 56.154ms for 950 entries.
Jan 13 20:31:31.276210 systemd-journald[1096]: System Journal (/var/log/journal/b1b7cb03d5794b42bcaedff19f05f309) is 8.0M, max 584.8M, 576.8M free.
Jan 13 20:31:31.385406 systemd-journald[1096]: Received client request to flush runtime journal.
Jan 13 20:31:31.385465 kernel: loop0: detected capacity change from 0 to 138184
Jan 13 20:31:31.281727 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:31:31.300434 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:31:31.301709 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:31:31.312686 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:31:31.314628 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:31:31.330170 udevadm[1143]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 20:31:31.387880 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:31:31.402373 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:31:31.403059 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:31:31.431185 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:31:31.432654 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:31:31.438727 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:31:31.458664 kernel: loop1: detected capacity change from 0 to 8
Jan 13 20:31:31.473284 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Jan 13 20:31:31.473303 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Jan 13 20:31:31.479590 kernel: loop2: detected capacity change from 0 to 205544
Jan 13 20:31:31.479893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:31:31.540602 kernel: loop3: detected capacity change from 0 to 141000
Jan 13 20:31:31.640601 kernel: loop4: detected capacity change from 0 to 138184
Jan 13 20:31:31.696336 kernel: loop5: detected capacity change from 0 to 8
Jan 13 20:31:31.699595 kernel: loop6: detected capacity change from 0 to 205544
Jan 13 20:31:31.736589 kernel: loop7: detected capacity change from 0 to 141000
Jan 13 20:31:31.792715 (sd-merge)[1162]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 13 20:31:31.794510 (sd-merge)[1162]: Merged extensions into '/usr'.
Jan 13 20:31:31.800533 systemd[1]: Reloading requested from client PID 1136 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:31:31.800678 systemd[1]: Reloading...
Jan 13 20:31:31.885640 zram_generator::config[1188]: No configuration found.
Jan 13 20:31:32.104889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:31:32.164268 systemd[1]: Reloading finished in 362 ms.
Jan 13 20:31:32.188570 ldconfig[1131]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:31:32.190009 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:31:32.191271 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:31:32.192172 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:31:32.203758 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:31:32.205231 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:31:32.208701 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:31:32.218205 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:31:32.218224 systemd[1]: Reloading...
Jan 13 20:31:32.240959 systemd-udevd[1247]: Using default interface naming scheme 'v255'.
Jan 13 20:31:32.247136 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:31:32.247435 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:31:32.249518 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:31:32.249929 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 13 20:31:32.250000 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 13 20:31:32.257018 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:31:32.257029 systemd-tmpfiles[1246]: Skipping /boot
Jan 13 20:31:32.275580 zram_generator::config[1271]: No configuration found.
Jan 13 20:31:32.275084 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:31:32.275098 systemd-tmpfiles[1246]: Skipping /boot
Jan 13 20:31:32.425283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1285)
Jan 13 20:31:32.455627 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 13 20:31:32.459686 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 20:31:32.514585 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 20:31:32.514659 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:31:32.530207 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:31:32.567573 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:31:32.584573 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 13 20:31:32.584612 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 13 20:31:32.589584 kernel: Console: switching to colour dummy device 80x25
Jan 13 20:31:32.591779 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 13 20:31:32.591819 kernel: [drm] features: -context_init
Jan 13 20:31:32.594594 kernel: [drm] number of scanouts: 1
Jan 13 20:31:32.594632 kernel: [drm] number of cap sets: 0
Jan 13 20:31:32.597580 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 13 20:31:32.607124 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 13 20:31:32.607166 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 20:31:32.610580 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:31:32.612533 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 13 20:31:32.613143 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:31:32.613607 systemd[1]: Reloading finished in 395 ms.
Jan 13 20:31:32.627975 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:31:32.637610 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:31:32.665096 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:31:32.687480 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:31:32.692742 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:31:32.700824 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:31:32.701039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:31:32.702719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:31:32.704729 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:31:32.710747 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:31:32.713750 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:31:32.713973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:31:32.717602 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:31:32.719706 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:31:32.729715 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:31:32.739699 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:31:32.743549 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:31:32.752724 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:31:32.754034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:31:32.754116 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:31:32.755024 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:31:32.756529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:31:32.756955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:31:32.757221 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:31:32.757336 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:31:32.757858 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:31:32.758008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:31:32.759477 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:31:32.760665 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:31:32.780090 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:31:32.782525 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:31:32.782638 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:31:32.791782 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:31:32.801514 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:31:32.823853 lvm[1389]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:31:32.818984 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:31:32.824701 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:31:32.837807 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:31:32.846938 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:31:32.850995 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:31:32.863788 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:31:32.868632 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:31:32.877065 augenrules[1411]: No rules
Jan 13 20:31:32.879782 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:31:32.880235 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:31:32.881053 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:31:32.887090 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:31:32.907404 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:31:32.913085 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:31:32.949263 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:31:32.981152 systemd-networkd[1374]: lo: Link UP
Jan 13 20:31:32.981433 systemd-networkd[1374]: lo: Gained carrier
Jan 13 20:31:32.982895 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:31:32.987697 systemd-networkd[1374]: Enumeration completed
Jan 13 20:31:32.990622 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:31:32.992879 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:32.992890 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:31:32.994078 systemd-networkd[1374]: eth0: Link UP
Jan 13 20:31:32.994149 systemd-networkd[1374]: eth0: Gained carrier
Jan 13 20:31:32.994215 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:32.999896 systemd-resolved[1376]: Positive Trust Anchors:
Jan 13 20:31:33.000176 systemd-resolved[1376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:31:33.000271 systemd-resolved[1376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:31:33.003841 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:31:33.008621 systemd-networkd[1374]: eth0: DHCPv4 address 172.24.4.206/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jan 13 20:31:33.010725 systemd-resolved[1376]: Using system hostname 'ci-4186-1-0-8-e51fb1a5ac.novalocal'.
Jan 13 20:31:33.013284 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:31:33.016118 systemd[1]: Reached target network.target - Network.
Jan 13 20:31:33.016631 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:31:33.031870 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:31:33.033526 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:31:33.034147 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:31:33.036668 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:31:33.037194 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:31:33.037634 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:31:33.037664 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:31:33.038123 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:31:33.038822 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:31:33.040485 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:31:33.042116 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:31:33.046176 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:31:33.049849 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:31:33.050300 systemd-timesyncd[1379]: Contacted time server 95.179.212.126:123 (0.flatcar.pool.ntp.org).
Jan 13 20:31:33.050365 systemd-timesyncd[1379]: Initial clock synchronization to Mon 2025-01-13 20:31:33.002792 UTC.
Jan 13 20:31:33.064129 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:31:33.065443 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:31:33.069058 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:31:33.069590 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:31:33.070137 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:31:33.070173 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:31:33.076644 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:31:33.079798 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:31:33.087704 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:31:33.100687 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:31:33.108816 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:31:33.109473 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:31:33.115487 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:31:33.119913 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:31:33.126210 jq[1440]: false
Jan 13 20:31:33.134780 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:31:33.138189 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:31:33.143685 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:31:33.147950 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:31:33.148471 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:31:33.152707 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:31:33.160887 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:31:33.169452 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:31:33.171367 extend-filesystems[1441]: Found loop4
Jan 13 20:31:33.171367 extend-filesystems[1441]: Found loop5
Jan 13 20:31:33.171367 extend-filesystems[1441]: Found loop6
Jan 13 20:31:33.171367 extend-filesystems[1441]: Found loop7
Jan 13 20:31:33.171367 extend-filesystems[1441]: Found vda
Jan 13 20:31:33.171367 extend-filesystems[1441]: Found vda1
Jan 13 20:31:33.171367 extend-filesystems[1441]: Found vda2
Jan 13 20:31:33.171367 extend-filesystems[1441]: Found vda3
Jan 13 20:31:33.171367 extend-filesystems[1441]: Found usr
Jan 13 20:31:33.171367 extend-filesystems[1441]: Found vda4
Jan 13 20:31:33.171367 extend-filesystems[1441]: Found vda6
Jan 13 20:31:33.171367 extend-filesystems[1441]: Found vda7
Jan 13 20:31:33.171367 extend-filesystems[1441]: Found vda9
Jan 13 20:31:33.171367 extend-filesystems[1441]: Checking size of /dev/vda9
Jan 13 20:31:33.314595 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jan 13 20:31:33.314640 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jan 13 20:31:33.314659 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1281)
Jan 13 20:31:33.170626 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:31:33.199787 dbus-daemon[1437]: [system] SELinux support is enabled
Jan 13 20:31:33.315096 extend-filesystems[1441]: Resized partition /dev/vda9
Jan 13 20:31:33.315478 update_engine[1448]: I20250113 20:31:33.214766 1448 main.cc:92] Flatcar Update Engine starting
Jan 13 20:31:33.315478 update_engine[1448]: I20250113 20:31:33.230506 1448 update_check_scheduler.cc:74] Next update check in 5m27s
Jan 13 20:31:33.171995 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:31:33.318129 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:31:33.318129 extend-filesystems[1471]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:31:33.318129 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:31:33.318129 extend-filesystems[1471]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jan 13 20:31:33.172421 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:31:33.331170 jq[1450]: true
Jan 13 20:31:33.331339 extend-filesystems[1441]: Resized filesystem in /dev/vda9
Jan 13 20:31:33.199985 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:31:33.331785 tar[1453]: linux-amd64/helm
Jan 13 20:31:33.215935 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:31:33.215962 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:31:33.224792 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:31:33.337632 jq[1467]: true
Jan 13 20:31:33.224815 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:31:33.225481 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:31:33.227651 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:31:33.232307 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:31:33.268957 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:31:33.269147 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:31:33.281361 systemd-logind[1447]: New seat seat0.
Jan 13 20:31:33.297523 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 20:31:33.297542 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:31:33.302148 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:31:33.306303 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:31:33.306992 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:31:33.378965 bash[1496]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:31:33.390373 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:31:33.403835 systemd[1]: Starting sshkeys.service...
Jan 13 20:31:33.422511 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 20:31:33.434965 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 20:31:33.504970 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:31:33.743900 containerd[1470]: time="2025-01-13T20:31:33.743796906Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:31:33.789580 containerd[1470]: time="2025-01-13T20:31:33.789524878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:33.794775 containerd[1470]: time="2025-01-13T20:31:33.794745780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:33.794884 containerd[1470]: time="2025-01-13T20:31:33.794867749Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:31:33.794991 containerd[1470]: time="2025-01-13T20:31:33.794975681Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:31:33.795247 containerd[1470]: time="2025-01-13T20:31:33.795229237Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:31:33.795312 containerd[1470]: time="2025-01-13T20:31:33.795299158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:33.795486 containerd[1470]: time="2025-01-13T20:31:33.795467253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:33.795544 containerd[1470]: time="2025-01-13T20:31:33.795531394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:33.795820 containerd[1470]: time="2025-01-13T20:31:33.795800068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:33.795929 containerd[1470]: time="2025-01-13T20:31:33.795913480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:33.795991 containerd[1470]: time="2025-01-13T20:31:33.795977029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:33.796050 containerd[1470]: time="2025-01-13T20:31:33.796036491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:33.796223 containerd[1470]: time="2025-01-13T20:31:33.796205428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:33.796580 containerd[1470]: time="2025-01-13T20:31:33.796542249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:33.796730 containerd[1470]: time="2025-01-13T20:31:33.796710715Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:33.796932 containerd[1470]: time="2025-01-13T20:31:33.796773724Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:31:33.796932 containerd[1470]: time="2025-01-13T20:31:33.796858042Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:31:33.796932 containerd[1470]: time="2025-01-13T20:31:33.796906272Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:31:33.806734 containerd[1470]: time="2025-01-13T20:31:33.806713045Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:31:33.806827 containerd[1470]: time="2025-01-13T20:31:33.806812231Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:31:33.807125 containerd[1470]: time="2025-01-13T20:31:33.806910185Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:31:33.807125 containerd[1470]: time="2025-01-13T20:31:33.806941944Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:31:33.807125 containerd[1470]: time="2025-01-13T20:31:33.806959747Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:31:33.807125 containerd[1470]: time="2025-01-13T20:31:33.807076246Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:31:33.807688 containerd[1470]: time="2025-01-13T20:31:33.807671332Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.807823988Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.807845248Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.807860918Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.807875385Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.807889551Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.807902686Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.807917644Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.807933414Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.807947169Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.807960434Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.807973158Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.807993727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.808007903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808274 containerd[1470]: time="2025-01-13T20:31:33.808022400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808036998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808053980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808068447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808081311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808100327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808113992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808129271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808141834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808154949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808167503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808182701Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808203430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808218759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.808584 containerd[1470]: time="2025-01-13T20:31:33.808230771Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:31:33.809581 containerd[1470]: time="2025-01-13T20:31:33.808888094Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:31:33.809581 containerd[1470]: time="2025-01-13T20:31:33.808913812Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:31:33.809581 containerd[1470]: time="2025-01-13T20:31:33.808925264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:31:33.809581 containerd[1470]: time="2025-01-13T20:31:33.808950391Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:31:33.809581 containerd[1470]: time="2025-01-13T20:31:33.808961482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:31:33.809581 containerd[1470]: time="2025-01-13T20:31:33.808973935Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:31:33.809581 containerd[1470]: time="2025-01-13T20:31:33.808983984Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:31:33.809581 containerd[1470]: time="2025-01-13T20:31:33.808996147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Jan 13 20:31:33.809771 containerd[1470]: time="2025-01-13T20:31:33.809279418Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:31:33.809771 containerd[1470]: time="2025-01-13T20:31:33.809332798Z" level=info msg="Connect containerd service" Jan 13 20:31:33.809771 containerd[1470]: time="2025-01-13T20:31:33.809367483Z" level=info msg="using legacy CRI server" Jan 13 20:31:33.809771 containerd[1470]: time="2025-01-13T20:31:33.809374867Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:31:33.809771 containerd[1470]: time="2025-01-13T20:31:33.809480946Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:31:33.813199 containerd[1470]: time="2025-01-13T20:31:33.813147182Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:31:33.813498 
containerd[1470]: time="2025-01-13T20:31:33.813347017Z" level=info msg="Start subscribing containerd event" Jan 13 20:31:33.813498 containerd[1470]: time="2025-01-13T20:31:33.813402862Z" level=info msg="Start recovering state" Jan 13 20:31:33.813498 containerd[1470]: time="2025-01-13T20:31:33.813481259Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:31:33.813602 containerd[1470]: time="2025-01-13T20:31:33.813536172Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:31:33.813682 containerd[1470]: time="2025-01-13T20:31:33.813480327Z" level=info msg="Start event monitor" Jan 13 20:31:33.814612 containerd[1470]: time="2025-01-13T20:31:33.813729835Z" level=info msg="Start snapshots syncer" Jan 13 20:31:33.814612 containerd[1470]: time="2025-01-13T20:31:33.813744653Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:31:33.814612 containerd[1470]: time="2025-01-13T20:31:33.813752016Z" level=info msg="Start streaming server" Jan 13 20:31:33.814690 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:31:33.816479 containerd[1470]: time="2025-01-13T20:31:33.816438936Z" level=info msg="containerd successfully booted in 0.073684s" Jan 13 20:31:34.018889 tar[1453]: linux-amd64/LICENSE Jan 13 20:31:34.019068 tar[1453]: linux-amd64/README.md Jan 13 20:31:34.030318 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:31:34.067880 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:31:34.105141 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:31:34.113580 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:31:34.125712 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:31:34.125883 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:31:34.136914 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:31:34.145998 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:31:34.157408 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:31:34.162900 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:31:34.166281 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:31:34.916804 systemd-networkd[1374]: eth0: Gained IPv6LL Jan 13 20:31:34.920780 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:31:34.928192 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:31:34.944269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:31:34.953605 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:31:35.006685 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:31:36.903383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:31:36.918452 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:31:38.564516 kubelet[1555]: E0113 20:31:38.564354 1555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:31:38.567686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:31:38.568008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:31:38.568609 systemd[1]: kubelet.service: Consumed 2.279s CPU time. Jan 13 20:31:38.766945 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:31:38.781364 systemd[1]: Started sshd@0-172.24.4.206:22-172.24.4.1:55686.service - OpenSSH per-connection server daemon (172.24.4.1:55686). Jan 13 20:31:39.181885 agetty[1534]: failed to open credentials directory Jan 13 20:31:39.185752 agetty[1535]: failed to open credentials directory Jan 13 20:31:39.225864 login[1534]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying Jan 13 20:31:39.231611 login[1535]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 20:31:39.265080 systemd-logind[1447]: New session 1 of user core. Jan 13 20:31:39.271388 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:31:39.285966 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:31:39.299002 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:31:39.311055 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:31:39.314657 (systemd)[1571]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:31:39.428592 systemd[1571]: Queued start job for default target default.target. Jan 13 20:31:39.438544 systemd[1571]: Created slice app.slice - User Application Slice. Jan 13 20:31:39.438723 systemd[1571]: Reached target paths.target - Paths. Jan 13 20:31:39.438807 systemd[1571]: Reached target timers.target - Timers. Jan 13 20:31:39.440092 systemd[1571]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:31:39.451690 systemd[1571]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:31:39.451897 systemd[1571]: Reached target sockets.target - Sockets. Jan 13 20:31:39.451923 systemd[1571]: Reached target basic.target - Basic System. Jan 13 20:31:39.451980 systemd[1571]: Reached target default.target - Main User Target. Jan 13 20:31:39.452019 systemd[1571]: Startup finished in 131ms. Jan 13 20:31:39.452135 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:31:39.454871 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:31:39.852406 sshd[1563]: Accepted publickey for core from 172.24.4.1 port 55686 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:31:39.855034 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:39.865658 systemd-logind[1447]: New session 3 of user core. Jan 13 20:31:39.876007 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 13 20:31:40.170989 coreos-metadata[1436]: Jan 13 20:31:40.170 WARN failed to locate config-drive, using the metadata service API instead Jan 13 20:31:40.219933 coreos-metadata[1436]: Jan 13 20:31:40.219 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 13 20:31:40.226902 login[1534]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 20:31:40.238057 systemd-logind[1447]: New session 2 of user core. Jan 13 20:31:40.254302 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:31:40.410353 coreos-metadata[1436]: Jan 13 20:31:40.410 INFO Fetch successful Jan 13 20:31:40.410526 coreos-metadata[1436]: Jan 13 20:31:40.410 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 20:31:40.427031 coreos-metadata[1436]: Jan 13 20:31:40.426 INFO Fetch successful Jan 13 20:31:40.427031 coreos-metadata[1436]: Jan 13 20:31:40.426 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 13 20:31:40.436019 coreos-metadata[1436]: Jan 13 20:31:40.435 INFO Fetch successful Jan 13 20:31:40.436132 coreos-metadata[1436]: Jan 13 20:31:40.436 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 13 20:31:40.447109 coreos-metadata[1436]: Jan 13 20:31:40.447 INFO Fetch successful Jan 13 20:31:40.447218 coreos-metadata[1436]: Jan 13 20:31:40.447 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 13 20:31:40.457527 coreos-metadata[1436]: Jan 13 20:31:40.457 INFO Fetch successful Jan 13 20:31:40.457719 coreos-metadata[1436]: Jan 13 20:31:40.457 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 13 20:31:40.468030 coreos-metadata[1436]: Jan 13 20:31:40.467 INFO Fetch successful Jan 13 20:31:40.475042 systemd[1]: Started sshd@1-172.24.4.206:22-172.24.4.1:55702.service - OpenSSH per-connection server daemon (172.24.4.1:55702). Jan 13 20:31:40.532520 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:31:40.534280 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:31:40.535282 coreos-metadata[1499]: Jan 13 20:31:40.534 WARN failed to locate config-drive, using the metadata service API instead Jan 13 20:31:40.551588 coreos-metadata[1499]: Jan 13 20:31:40.551 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 13 20:31:40.562965 coreos-metadata[1499]: Jan 13 20:31:40.562 INFO Fetch successful Jan 13 20:31:40.562965 coreos-metadata[1499]: Jan 13 20:31:40.562 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:31:40.573030 coreos-metadata[1499]: Jan 13 20:31:40.572 INFO Fetch successful Jan 13 20:31:40.577364 unknown[1499]: wrote ssh authorized keys file for user: core Jan 13 20:31:40.611194 update-ssh-keys[1612]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:31:40.611525 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:31:40.613987 systemd[1]: Finished sshkeys.service. Jan 13 20:31:40.614810 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:31:40.615432 systemd[1]: Startup finished in 1.118s (kernel) + 17.380s (initrd) + 10.525s (userspace) = 29.024s. 
Jan 13 20:31:42.604311 sshd[1604]: Accepted publickey for core from 172.24.4.1 port 55702 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:31:42.607509 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:42.619692 systemd-logind[1447]: New session 4 of user core. Jan 13 20:31:42.631915 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:31:43.386604 sshd[1617]: Connection closed by 172.24.4.1 port 55702 Jan 13 20:31:43.388669 sshd-session[1604]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:43.398771 systemd[1]: sshd@1-172.24.4.206:22-172.24.4.1:55702.service: Deactivated successfully. Jan 13 20:31:43.401905 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:31:43.404854 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:31:43.411057 systemd[1]: Started sshd@2-172.24.4.206:22-172.24.4.1:55710.service - OpenSSH per-connection server daemon (172.24.4.1:55710). Jan 13 20:31:43.413990 systemd-logind[1447]: Removed session 4. Jan 13 20:31:44.780025 sshd[1622]: Accepted publickey for core from 172.24.4.1 port 55710 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:31:44.781720 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:44.788517 systemd-logind[1447]: New session 5 of user core. Jan 13 20:31:44.793682 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:31:45.562470 sshd[1624]: Connection closed by 172.24.4.1 port 55710 Jan 13 20:31:45.563899 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:45.574402 systemd[1]: sshd@2-172.24.4.206:22-172.24.4.1:55710.service: Deactivated successfully. Jan 13 20:31:45.577799 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:31:45.581700 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:31:45.588119 systemd[1]: Started sshd@3-172.24.4.206:22-172.24.4.1:43478.service - OpenSSH per-connection server daemon (172.24.4.1:43478). Jan 13 20:31:45.590968 systemd-logind[1447]: Removed session 5. Jan 13 20:31:46.960859 sshd[1629]: Accepted publickey for core from 172.24.4.1 port 43478 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:31:46.963393 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:46.973964 systemd-logind[1447]: New session 6 of user core. Jan 13 20:31:46.983866 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:31:47.691838 sshd[1631]: Connection closed by 172.24.4.1 port 43478 Jan 13 20:31:47.691705 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:47.703982 systemd[1]: sshd@3-172.24.4.206:22-172.24.4.1:43478.service: Deactivated successfully. Jan 13 20:31:47.707030 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:31:47.709887 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:31:47.715102 systemd[1]: Started sshd@4-172.24.4.206:22-172.24.4.1:43488.service - OpenSSH per-connection server daemon (172.24.4.1:43488). Jan 13 20:31:47.718098 systemd-logind[1447]: Removed session 6. Jan 13 20:31:48.785683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:31:48.802123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 20:31:49.103328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:31:49.116078 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:31:49.213997 kubelet[1645]: E0113 20:31:49.213905 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:31:49.218643 sshd[1636]: Accepted publickey for core from 172.24.4.1 port 43488 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:31:49.220332 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:49.225203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:31:49.226043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:31:49.234463 systemd-logind[1447]: New session 7 of user core. Jan 13 20:31:49.238872 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:31:49.618812 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:31:49.619453 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:31:49.637640 sudo[1654]: pam_unix(sudo:session): session closed for user root Jan 13 20:31:49.901800 sshd[1653]: Connection closed by 172.24.4.1 port 43488 Jan 13 20:31:49.902898 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:49.915198 systemd[1]: sshd@4-172.24.4.206:22-172.24.4.1:43488.service: Deactivated successfully. Jan 13 20:31:49.918185 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:31:49.921938 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:31:49.927135 systemd[1]: Started sshd@5-172.24.4.206:22-172.24.4.1:43494.service - OpenSSH per-connection server daemon (172.24.4.1:43494). Jan 13 20:31:49.929364 systemd-logind[1447]: Removed session 7. Jan 13 20:31:51.113726 sshd[1659]: Accepted publickey for core from 172.24.4.1 port 43494 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:31:51.116471 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:51.125962 systemd-logind[1447]: New session 8 of user core. Jan 13 20:31:51.138861 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:31:51.540518 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:31:51.541198 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:31:51.547895 sudo[1663]: pam_unix(sudo:session): session closed for user root Jan 13 20:31:51.558874 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:31:51.559508 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:31:51.592280 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:31:51.646240 augenrules[1685]: No rules Jan 13 20:31:51.647275 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 13 20:31:51.647716 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:31:51.650180 sudo[1662]: pam_unix(sudo:session): session closed for user root Jan 13 20:31:51.814552 sshd[1661]: Connection closed by 172.24.4.1 port 43494 Jan 13 20:31:51.816289 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:51.828749 systemd[1]: sshd@5-172.24.4.206:22-172.24.4.1:43494.service: Deactivated successfully. Jan 13 20:31:51.831757 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:31:51.833473 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:31:51.845187 systemd[1]: Started sshd@6-172.24.4.206:22-172.24.4.1:43510.service - OpenSSH per-connection server daemon (172.24.4.1:43510). Jan 13 20:31:51.848047 systemd-logind[1447]: Removed session 8. Jan 13 20:31:53.029074 sshd[1693]: Accepted publickey for core from 172.24.4.1 port 43510 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:31:53.031516 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:53.040669 systemd-logind[1447]: New session 9 of user core. Jan 13 20:31:53.049851 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:31:53.438685 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:31:53.439321 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:31:54.086991 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:31:54.087753 (dockerd)[1715]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:31:54.705262 dockerd[1715]: time="2025-01-13T20:31:54.704886257Z" level=info msg="Starting up" Jan 13 20:31:54.854472 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport45525216-merged.mount: Deactivated successfully. Jan 13 20:31:54.887620 systemd[1]: var-lib-docker-metacopy\x2dcheck241436866-merged.mount: Deactivated successfully. Jan 13 20:31:54.927371 dockerd[1715]: time="2025-01-13T20:31:54.927014888Z" level=info msg="Loading containers: start." Jan 13 20:31:55.136622 kernel: Initializing XFRM netlink socket Jan 13 20:31:55.226392 systemd-networkd[1374]: docker0: Link UP Jan 13 20:31:55.262905 dockerd[1715]: time="2025-01-13T20:31:55.262766887Z" level=info msg="Loading containers: done." Jan 13 20:31:55.289426 dockerd[1715]: time="2025-01-13T20:31:55.289292365Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:31:55.289577 dockerd[1715]: time="2025-01-13T20:31:55.289512040Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 13 20:31:55.289791 dockerd[1715]: time="2025-01-13T20:31:55.289744308Z" level=info msg="Daemon has completed initialization" Jan 13 20:31:55.362989 dockerd[1715]: time="2025-01-13T20:31:55.362864692Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:31:55.363702 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 13 20:31:57.152355 containerd[1470]: time="2025-01-13T20:31:57.152259015Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 20:31:57.825335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263272108.mount: Deactivated successfully. Jan 13 20:31:59.284765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:31:59.290737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:31:59.399146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:31:59.409876 (kubelet)[1965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:31:59.650381 kubelet[1965]: E0113 20:31:59.650287 1965 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:31:59.654344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:31:59.654478 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:31:59.942920 containerd[1470]: time="2025-01-13T20:31:59.942826775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:59.944685 containerd[1470]: time="2025-01-13T20:31:59.944658421Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975491" Jan 13 20:31:59.946371 containerd[1470]: time="2025-01-13T20:31:59.946326016Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:59.949873 containerd[1470]: time="2025-01-13T20:31:59.949800795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:59.951060 containerd[1470]: time="2025-01-13T20:31:59.951034632Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.798710799s" Jan 13 20:31:59.951263 containerd[1470]: time="2025-01-13T20:31:59.951131807Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Jan 13 20:31:59.953610 containerd[1470]: time="2025-01-13T20:31:59.953526257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 20:32:01.973910 containerd[1470]: time="2025-01-13T20:32:01.973857147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:01.975321 containerd[1470]: time="2025-01-13T20:32:01.975141130Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes 
read=24702165" Jan 13 20:32:01.977060 containerd[1470]: time="2025-01-13T20:32:01.976998842Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:01.980538 containerd[1470]: time="2025-01-13T20:32:01.980470395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:01.982915 containerd[1470]: time="2025-01-13T20:32:01.982846601Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 2.02928726s" Jan 13 20:32:01.982915 containerd[1470]: time="2025-01-13T20:32:01.982877853Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Jan 13 20:32:01.983339 containerd[1470]: time="2025-01-13T20:32:01.983257989Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 20:32:03.859052 containerd[1470]: time="2025-01-13T20:32:03.858958065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:03.860427 containerd[1470]: time="2025-01-13T20:32:03.860385201Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652075" Jan 13 20:32:03.861684 containerd[1470]: time="2025-01-13T20:32:03.861617802Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:03.864976 containerd[1470]: time="2025-01-13T20:32:03.864905782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:03.866268 containerd[1470]: time="2025-01-13T20:32:03.866167043Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.882880895s" Jan 13 20:32:03.866268 containerd[1470]: time="2025-01-13T20:32:03.866196636Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Jan 13 20:32:03.866904 containerd[1470]: time="2025-01-13T20:32:03.866778361Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 20:32:05.295750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1475179671.mount: Deactivated successfully. 
Jan 13 20:32:05.857650 containerd[1470]: time="2025-01-13T20:32:05.857423464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:05.859123 containerd[1470]: time="2025-01-13T20:32:05.858929686Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230251" Jan 13 20:32:05.860850 containerd[1470]: time="2025-01-13T20:32:05.860787837Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:05.863511 containerd[1470]: time="2025-01-13T20:32:05.863422404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:05.864240 containerd[1470]: time="2025-01-13T20:32:05.864019265Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 1.997044916s" Jan 13 20:32:05.864240 containerd[1470]: time="2025-01-13T20:32:05.864052354Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Jan 13 20:32:05.864591 containerd[1470]: time="2025-01-13T20:32:05.864542647Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:32:06.544306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4216830049.mount: Deactivated successfully. 
Jan 13 20:32:08.453947 containerd[1470]: time="2025-01-13T20:32:08.453838034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:08.484509 containerd[1470]: time="2025-01-13T20:32:08.484350533Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 13 20:32:08.528254 containerd[1470]: time="2025-01-13T20:32:08.528128829Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:08.571177 containerd[1470]: time="2025-01-13T20:32:08.571054508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:08.574663 containerd[1470]: time="2025-01-13T20:32:08.574480796Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.709876215s" Jan 13 20:32:08.574663 containerd[1470]: time="2025-01-13T20:32:08.574555651Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:32:08.576980 containerd[1470]: time="2025-01-13T20:32:08.576777697Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 20:32:09.686496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469809474.mount: Deactivated successfully. Jan 13 20:32:09.690676 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:32:09.697940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 20:32:09.704155 containerd[1470]: time="2025-01-13T20:32:09.703894224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:09.707484 containerd[1470]: time="2025-01-13T20:32:09.707373485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 13 20:32:09.709245 containerd[1470]: time="2025-01-13T20:32:09.709044472Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:09.718676 containerd[1470]: time="2025-01-13T20:32:09.717884981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:09.719703 containerd[1470]: time="2025-01-13T20:32:09.719645790Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.142807689s" Jan 13 20:32:09.719980 containerd[1470]: time="2025-01-13T20:32:09.719938588Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 13 20:32:09.723110 containerd[1470]: time="2025-01-13T20:32:09.722439406Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 20:32:09.853868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:32:09.854512 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:32:09.922859 kubelet[2044]: E0113 20:32:09.922685 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:32:09.925505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:32:09.925861 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:32:10.566654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983136448.mount: Deactivated successfully. 
Jan 13 20:32:13.162627 containerd[1470]: time="2025-01-13T20:32:13.162480951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:13.164127 containerd[1470]: time="2025-01-13T20:32:13.163933348Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Jan 13 20:32:13.165886 containerd[1470]: time="2025-01-13T20:32:13.165824904Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:13.169856 containerd[1470]: time="2025-01-13T20:32:13.169773067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:13.171420 containerd[1470]: time="2025-01-13T20:32:13.171308961Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.448814489s" Jan 13 20:32:13.171420 containerd[1470]: time="2025-01-13T20:32:13.171337307Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 13 20:32:17.675676 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:32:17.686040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:32:17.747254 systemd[1]: Reloading requested from client PID 2130 ('systemctl') (unit session-9.scope)... Jan 13 20:32:17.747271 systemd[1]: Reloading... Jan 13 20:32:17.854590 zram_generator::config[2165]: No configuration found. Jan 13 20:32:18.014815 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:32:18.098517 systemd[1]: Reloading finished in 350 ms. Jan 13 20:32:18.457216 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:32:18.457389 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:32:18.457926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:32:18.471338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:32:18.605460 update_engine[1448]: I20250113 20:32:18.605387 1448 update_attempter.cc:509] Updating boot flags... Jan 13 20:32:18.983664 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2237) Jan 13 20:32:19.056777 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2237) Jan 13 20:32:19.397030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:32:19.415097 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:32:19.535793 kubelet[2249]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:32:19.535793 kubelet[2249]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:32:19.535793 kubelet[2249]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:32:19.539393 kubelet[2249]: I0113 20:32:19.539282 2249 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:32:19.916688 kubelet[2249]: I0113 20:32:19.916328 2249 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:32:19.916688 kubelet[2249]: I0113 20:32:19.916385 2249 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:32:19.917172 kubelet[2249]: I0113 20:32:19.916955 2249 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:32:20.114635 kubelet[2249]: E0113 20:32:20.114514 2249 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.206:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.206:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:32:20.119254 kubelet[2249]: I0113 20:32:20.118267 2249 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:32:20.146854 kubelet[2249]: E0113 20:32:20.146779 2249 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:32:20.147108 kubelet[2249]: I0113 20:32:20.147080 2249 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:32:20.158063 kubelet[2249]: I0113 20:32:20.158009 2249 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:32:20.158797 kubelet[2249]: I0113 20:32:20.158441 2249 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:32:20.159014 kubelet[2249]: I0113 20:32:20.158777 2249 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:32:20.159271 kubelet[2249]: I0113 20:32:20.158849 2249 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-8-e51fb1a5ac.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:32:20.159271 kubelet[2249]: I0113 20:32:20.159253 2249 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:32:20.159271 kubelet[2249]: I0113 20:32:20.159275 2249 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:32:20.159652 kubelet[2249]: I0113 20:32:20.159463 2249 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:32:20.165434 kubelet[2249]: I0113 20:32:20.164954 2249 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:32:20.165434 kubelet[2249]: I0113 20:32:20.165031 2249 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:32:20.165434 kubelet[2249]: I0113 20:32:20.165107 2249 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:32:20.165434 kubelet[2249]: I0113 20:32:20.165144 2249 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:32:20.173279 kubelet[2249]: W0113 20:32:20.172710 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-8-e51fb1a5ac.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.206:6443: connect: connection refused Jan 13 20:32:20.173279 kubelet[2249]: E0113 20:32:20.172840 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.24.4.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-8-e51fb1a5ac.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.206:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:32:20.178125 kubelet[2249]: I0113 20:32:20.177875 2249 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:32:20.182317 kubelet[2249]: I0113 20:32:20.182281 2249 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:32:20.185612 kubelet[2249]: W0113 20:32:20.184117 2249 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:32:20.185612 kubelet[2249]: I0113 20:32:20.185470 2249 server.go:1269] "Started kubelet" Jan 13 20:32:20.186076 kubelet[2249]: W0113 20:32:20.185991 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.206:6443: connect: connection refused Jan 13 20:32:20.186285 kubelet[2249]: E0113 20:32:20.186245 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.206:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:32:20.192410 kubelet[2249]: I0113 20:32:20.192347 2249 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:32:20.194850 kubelet[2249]: I0113 20:32:20.194682 2249 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:32:20.199791 kubelet[2249]: I0113 20:32:20.199744 2249 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:32:20.200029 kubelet[2249]: I0113 20:32:20.199928 2249 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:32:20.200515 kubelet[2249]: I0113 20:32:20.200482 2249 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:32:20.208057 kubelet[2249]: E0113 20:32:20.201270 2249 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.206:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.206:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-8-e51fb1a5ac.novalocal.181a5ab51a87b73e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-8-e51fb1a5ac.novalocal,UID:ci-4186-1-0-8-e51fb1a5ac.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-8-e51fb1a5ac.novalocal,},FirstTimestamp:2025-01-13 20:32:20.185429822 +0000 UTC m=+0.763641328,LastTimestamp:2025-01-13 20:32:20.185429822 +0000 UTC m=+0.763641328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-8-e51fb1a5ac.novalocal,}" Jan 13 20:32:20.211681 kubelet[2249]: I0113 20:32:20.211629 2249 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 
20:32:20.214161 kubelet[2249]: E0113 20:32:20.214127 2249 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:32:20.214584 kubelet[2249]: E0113 20:32:20.214537 2249 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" not found" Jan 13 20:32:20.215465 kubelet[2249]: I0113 20:32:20.214984 2249 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:32:20.215465 kubelet[2249]: I0113 20:32:20.215242 2249 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:32:20.215465 kubelet[2249]: I0113 20:32:20.215308 2249 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:32:20.216773 kubelet[2249]: W0113 20:32:20.216577 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.206:6443: connect: connection refused Jan 13 20:32:20.217039 kubelet[2249]: E0113 20:32:20.216976 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.206:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:32:20.217634 kubelet[2249]: I0113 20:32:20.217611 2249 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:32:20.217931 kubelet[2249]: I0113 20:32:20.217855 2249 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:32:20.218750 kubelet[2249]: E0113 20:32:20.218680 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-8-e51fb1a5ac.novalocal?timeout=10s\": dial tcp 172.24.4.206:6443: connect: connection refused" interval="200ms" Jan 13 20:32:20.220573 kubelet[2249]: I0113 20:32:20.219358 2249 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:32:20.239919 kubelet[2249]: I0113 20:32:20.239824 2249 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:32:20.241061 kubelet[2249]: I0113 20:32:20.241005 2249 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:32:20.241115 kubelet[2249]: I0113 20:32:20.241072 2249 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:32:20.241115 kubelet[2249]: I0113 20:32:20.241111 2249 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:32:20.241231 kubelet[2249]: E0113 20:32:20.241195 2249 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:32:20.247860 kubelet[2249]: W0113 20:32:20.247215 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.206:6443: connect: connection refused Jan 13 20:32:20.248089 kubelet[2249]: E0113 20:32:20.247843 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.206:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:32:20.263576 kubelet[2249]: I0113 20:32:20.263520 2249 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:32:20.263680 kubelet[2249]: I0113 20:32:20.263609 2249 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:32:20.263680 kubelet[2249]: I0113 20:32:20.263646 2249 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:32:20.269333 kubelet[2249]: I0113 20:32:20.269292 2249 policy_none.go:49] "None policy: Start" Jan 13 20:32:20.270264 kubelet[2249]: I0113 20:32:20.270226 2249 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:32:20.270332 kubelet[2249]: I0113 20:32:20.270275 2249 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:32:20.278768 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:32:20.290321 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:32:20.296344 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:32:20.305434 kubelet[2249]: I0113 20:32:20.305227 2249 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:32:20.306577 kubelet[2249]: I0113 20:32:20.306270 2249 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:32:20.306577 kubelet[2249]: I0113 20:32:20.306285 2249 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:32:20.307240 kubelet[2249]: I0113 20:32:20.307070 2249 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:32:20.309799 kubelet[2249]: E0113 20:32:20.309703 2249 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" not found" Jan 13 20:32:20.361944 systemd[1]: Created slice kubepods-burstable-podb63eba8f048fdb2408991364e0184b74.slice - libcontainer container kubepods-burstable-podb63eba8f048fdb2408991364e0184b74.slice. Jan 13 20:32:20.376663 systemd[1]: Created slice kubepods-burstable-podd696fdd9ddff55c8b53ebf2da81f318b.slice - libcontainer container kubepods-burstable-podd696fdd9ddff55c8b53ebf2da81f318b.slice. 
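The "Failed to ensure lease exists, will retry" entries in this stretch show the kubelet backing off while the API server at 172.24.4.206:6443 is still coming up: the retry interval doubles from 200ms here to 400ms, 800ms, and 1.6s further down. A minimal, self-contained Go sketch of that dial-and-back-off pattern (a hypothetical helper, not the kubelet's actual lease controller; the 7s cap is an assumption):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeAPIServer dials addr and, on failure, sleeps for a doubling
    // interval (200ms, 400ms, 800ms, 1.6s, ...) before retrying, mirroring
    // the interval= values visible in the lease-controller log entries.
    func probeAPIServer(addr string, start, maxInterval time.Duration) {
        interval := start
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("API server reachable:", addr)
                return
            }
            fmt.Printf("dial %s: %v; retrying in %s\n", addr, err, interval)
            time.Sleep(interval)
            if next := interval * 2; next <= maxInterval {
                interval = next // exponential back-off, capped
            }
        }
    }

    func main() {
        probeAPIServer("172.24.4.206:6443", 200*time.Millisecond, 7*time.Second)
    }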
Jan 13 20:32:20.396553 systemd[1]: Created slice kubepods-burstable-pod34c39517b6e603d575e4a39423fe46e3.slice - libcontainer container kubepods-burstable-pod34c39517b6e603d575e4a39423fe46e3.slice. Jan 13 20:32:20.409890 kubelet[2249]: I0113 20:32:20.409457 2249 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:20.410308 kubelet[2249]: E0113 20:32:20.410239 2249 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.206:6443/api/v1/nodes\": dial tcp 172.24.4.206:6443: connect: connection refused" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:20.419663 kubelet[2249]: E0113 20:32:20.419526 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-8-e51fb1a5ac.novalocal?timeout=10s\": dial tcp 172.24.4.206:6443: connect: connection refused" interval="400ms" Jan 13 20:32:20.517188 kubelet[2249]: I0113 20:32:20.517008 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b63eba8f048fdb2408991364e0184b74-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"b63eba8f048fdb2408991364e0184b74\") " pod="kube-system/kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:20.517188 kubelet[2249]: I0113 20:32:20.517074 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b63eba8f048fdb2408991364e0184b74-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"b63eba8f048fdb2408991364e0184b74\") " pod="kube-system/kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:20.517188 kubelet[2249]: I0113 20:32:20.517126 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d696fdd9ddff55c8b53ebf2da81f318b-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"d696fdd9ddff55c8b53ebf2da81f318b\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:20.518213 kubelet[2249]: I0113 20:32:20.517798 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d696fdd9ddff55c8b53ebf2da81f318b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"d696fdd9ddff55c8b53ebf2da81f318b\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:20.518213 kubelet[2249]: I0113 20:32:20.517916 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b63eba8f048fdb2408991364e0184b74-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"b63eba8f048fdb2408991364e0184b74\") " pod="kube-system/kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:20.518213 kubelet[2249]: I0113 20:32:20.517964 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d696fdd9ddff55c8b53ebf2da81f318b-ca-certs\") pod 
\"kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"d696fdd9ddff55c8b53ebf2da81f318b\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:20.518213 kubelet[2249]: I0113 20:32:20.518010 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d696fdd9ddff55c8b53ebf2da81f318b-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"d696fdd9ddff55c8b53ebf2da81f318b\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:20.518550 kubelet[2249]: I0113 20:32:20.518056 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d696fdd9ddff55c8b53ebf2da81f318b-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"d696fdd9ddff55c8b53ebf2da81f318b\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:20.518550 kubelet[2249]: I0113 20:32:20.518098 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34c39517b6e603d575e4a39423fe46e3-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"34c39517b6e603d575e4a39423fe46e3\") " pod="kube-system/kube-scheduler-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:20.613920 kubelet[2249]: I0113 20:32:20.613855 2249 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:20.614934 kubelet[2249]: E0113 20:32:20.614418 2249 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.206:6443/api/v1/nodes\": dial tcp 172.24.4.206:6443: connect: connection refused" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:20.674319 containerd[1470]: time="2025-01-13T20:32:20.674249471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal,Uid:b63eba8f048fdb2408991364e0184b74,Namespace:kube-system,Attempt:0,}" Jan 13 20:32:20.694255 containerd[1470]: time="2025-01-13T20:32:20.694129246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal,Uid:d696fdd9ddff55c8b53ebf2da81f318b,Namespace:kube-system,Attempt:0,}" Jan 13 20:32:20.702207 containerd[1470]: time="2025-01-13T20:32:20.702127721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-8-e51fb1a5ac.novalocal,Uid:34c39517b6e603d575e4a39423fe46e3,Namespace:kube-system,Attempt:0,}" Jan 13 20:32:20.820225 kubelet[2249]: E0113 20:32:20.820138 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-8-e51fb1a5ac.novalocal?timeout=10s\": dial tcp 172.24.4.206:6443: connect: connection refused" interval="800ms" Jan 13 20:32:21.019058 kubelet[2249]: I0113 20:32:21.018386 2249 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:21.019058 kubelet[2249]: E0113 20:32:21.018962 2249 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.206:6443/api/v1/nodes\": dial tcp 172.24.4.206:6443: connect: connection refused" 
node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:21.250685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1420108335.mount: Deactivated successfully. Jan 13 20:32:21.264120 kubelet[2249]: W0113 20:32:21.263971 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.206:6443: connect: connection refused Jan 13 20:32:21.264398 kubelet[2249]: E0113 20:32:21.264147 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.206:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:32:21.270552 containerd[1470]: time="2025-01-13T20:32:21.270435609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:32:21.274008 containerd[1470]: time="2025-01-13T20:32:21.273947806Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:32:21.276395 kubelet[2249]: W0113 20:32:21.276270 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.206:6443: connect: connection refused Jan 13 20:32:21.276395 kubelet[2249]: E0113 20:32:21.276358 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.206:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:32:21.279222 containerd[1470]: time="2025-01-13T20:32:21.279056460Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 13 20:32:21.281152 containerd[1470]: time="2025-01-13T20:32:21.281062867Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:32:21.285246 containerd[1470]: time="2025-01-13T20:32:21.285171047Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:32:21.295718 containerd[1470]: time="2025-01-13T20:32:21.295480585Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:32:21.296967 containerd[1470]: time="2025-01-13T20:32:21.296812814Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:32:21.305902 containerd[1470]: time="2025-01-13T20:32:21.305765680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 
20:32:21.313616 containerd[1470]: time="2025-01-13T20:32:21.313264236Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 638.793483ms" Jan 13 20:32:21.319274 containerd[1470]: time="2025-01-13T20:32:21.319186129Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 624.845709ms" Jan 13 20:32:21.320853 containerd[1470]: time="2025-01-13T20:32:21.320798844Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 618.497202ms" Jan 13 20:32:21.534156 containerd[1470]: time="2025-01-13T20:32:21.533432709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:32:21.534156 containerd[1470]: time="2025-01-13T20:32:21.533491480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:32:21.534156 containerd[1470]: time="2025-01-13T20:32:21.533505344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:21.534156 containerd[1470]: time="2025-01-13T20:32:21.533612750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:21.549553 containerd[1470]: time="2025-01-13T20:32:21.548931589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:32:21.549553 containerd[1470]: time="2025-01-13T20:32:21.549047631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:32:21.549553 containerd[1470]: time="2025-01-13T20:32:21.549092578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:21.549553 containerd[1470]: time="2025-01-13T20:32:21.549257013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:21.564191 containerd[1470]: time="2025-01-13T20:32:21.563995186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:32:21.564352 containerd[1470]: time="2025-01-13T20:32:21.564171201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:32:21.564440 containerd[1470]: time="2025-01-13T20:32:21.564338501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:21.564874 containerd[1470]: time="2025-01-13T20:32:21.564672810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:21.572441 systemd[1]: Started cri-containerd-4b8ca74e04e8015713b9262a7589d85bd5a1e7c5c5e0cd18188225675f8ce504.scope - libcontainer container 4b8ca74e04e8015713b9262a7589d85bd5a1e7c5c5e0cd18188225675f8ce504. Jan 13 20:32:21.578152 systemd[1]: Started cri-containerd-8915cf5458bf074192c7695cac0c9c7c84810a74aa0105c4fb819ee4d35e8d28.scope - libcontainer container 8915cf5458bf074192c7695cac0c9c7c84810a74aa0105c4fb819ee4d35e8d28. Jan 13 20:32:21.598099 kubelet[2249]: W0113 20:32:21.597715 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.206:6443: connect: connection refused Jan 13 20:32:21.597862 systemd[1]: Started cri-containerd-52785719dccd13e6ceb24278ce0008b12f559b8851062eb2075b92111aecdec0.scope - libcontainer container 52785719dccd13e6ceb24278ce0008b12f559b8851062eb2075b92111aecdec0. Jan 13 20:32:21.598509 kubelet[2249]: E0113 20:32:21.598467 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.206:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:32:21.621411 kubelet[2249]: E0113 20:32:21.621369 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-8-e51fb1a5ac.novalocal?timeout=10s\": dial tcp 172.24.4.206:6443: connect: connection refused" interval="1.6s" Jan 13 20:32:21.647688 containerd[1470]: time="2025-01-13T20:32:21.647008125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal,Uid:b63eba8f048fdb2408991364e0184b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"8915cf5458bf074192c7695cac0c9c7c84810a74aa0105c4fb819ee4d35e8d28\"" Jan 13 20:32:21.653845 containerd[1470]: time="2025-01-13T20:32:21.653807408Z" level=info msg="CreateContainer within sandbox \"8915cf5458bf074192c7695cac0c9c7c84810a74aa0105c4fb819ee4d35e8d28\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:32:21.668311 containerd[1470]: time="2025-01-13T20:32:21.668202576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal,Uid:d696fdd9ddff55c8b53ebf2da81f318b,Namespace:kube-system,Attempt:0,} returns sandbox id \"52785719dccd13e6ceb24278ce0008b12f559b8851062eb2075b92111aecdec0\"" Jan 13 20:32:21.672335 containerd[1470]: time="2025-01-13T20:32:21.672199013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-8-e51fb1a5ac.novalocal,Uid:34c39517b6e603d575e4a39423fe46e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b8ca74e04e8015713b9262a7589d85bd5a1e7c5c5e0cd18188225675f8ce504\"" Jan 13 20:32:21.672592 containerd[1470]: time="2025-01-13T20:32:21.672470152Z" level=info msg="CreateContainer within sandbox \"52785719dccd13e6ceb24278ce0008b12f559b8851062eb2075b92111aecdec0\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:32:21.674972 containerd[1470]: time="2025-01-13T20:32:21.674936897Z" level=info msg="CreateContainer within sandbox \"4b8ca74e04e8015713b9262a7589d85bd5a1e7c5c5e0cd18188225675f8ce504\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:32:21.700261 containerd[1470]: time="2025-01-13T20:32:21.700222571Z" level=info msg="CreateContainer within sandbox \"8915cf5458bf074192c7695cac0c9c7c84810a74aa0105c4fb819ee4d35e8d28\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4d03dfcf03e39c807e9755007b4ca68373dde241757a86e69197cc348c557eba\"" Jan 13 20:32:21.701231 containerd[1470]: time="2025-01-13T20:32:21.701192842Z" level=info msg="StartContainer for \"4d03dfcf03e39c807e9755007b4ca68373dde241757a86e69197cc348c557eba\"" Jan 13 20:32:21.717252 containerd[1470]: time="2025-01-13T20:32:21.717111881Z" level=info msg="CreateContainer within sandbox \"4b8ca74e04e8015713b9262a7589d85bd5a1e7c5c5e0cd18188225675f8ce504\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"78367edbb96a9097675b85893c134e5080252a513ac534aa33fff9f175c4d968\"" Jan 13 20:32:21.719162 containerd[1470]: time="2025-01-13T20:32:21.718084046Z" level=info msg="StartContainer for \"78367edbb96a9097675b85893c134e5080252a513ac534aa33fff9f175c4d968\"" Jan 13 20:32:21.719344 containerd[1470]: time="2025-01-13T20:32:21.719323785Z" level=info msg="CreateContainer within sandbox \"52785719dccd13e6ceb24278ce0008b12f559b8851062eb2075b92111aecdec0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"73099fcd200f1f001cc29fc36eb9eeac4e4db13baf25ece28d760895f448d406\"" Jan 13 20:32:21.719755 containerd[1470]: time="2025-01-13T20:32:21.719736981Z" level=info msg="StartContainer for \"73099fcd200f1f001cc29fc36eb9eeac4e4db13baf25ece28d760895f448d406\"" Jan 13 20:32:21.730775 systemd[1]: Started cri-containerd-4d03dfcf03e39c807e9755007b4ca68373dde241757a86e69197cc348c557eba.scope - libcontainer container 4d03dfcf03e39c807e9755007b4ca68373dde241757a86e69197cc348c557eba. Jan 13 20:32:21.738814 kubelet[2249]: W0113 20:32:21.738749 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-8-e51fb1a5ac.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.206:6443: connect: connection refused Jan 13 20:32:21.739040 kubelet[2249]: E0113 20:32:21.739021 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-8-e51fb1a5ac.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.206:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:32:21.756794 systemd[1]: Started cri-containerd-78367edbb96a9097675b85893c134e5080252a513ac534aa33fff9f175c4d968.scope - libcontainer container 78367edbb96a9097675b85893c134e5080252a513ac534aa33fff9f175c4d968. Jan 13 20:32:21.769739 systemd[1]: Started cri-containerd-73099fcd200f1f001cc29fc36eb9eeac4e4db13baf25ece28d760895f448d406.scope - libcontainer container 73099fcd200f1f001cc29fc36eb9eeac4e4db13baf25ece28d760895f448d406. 
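The recurring "reflector.go:561 ... failed to list *v1.Service/*v1.Node/*v1.CSIDriver" warnings above come from client-go informers: each informer runs a Reflector whose initial List is exactly the GET the log shows failing with "connection refused", and the Reflector retries with back-off instead of aborting, which is why the same warning repeats until the API server answers. A minimal client-go sketch of one such informer (the kubeconfig path is illustrative; the kubelet uses its own client configuration):

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
        svcInformer := factory.Core().V1().Services().Informer()
        svcInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                fmt.Println("observed service:", obj.(*corev1.Service).Name)
            },
        })
        stop := make(chan struct{})
        defer close(stop)
        // Start launches a Reflector per informer; if the List fails it is
        // logged (as above) and retried, rather than crashing the process.
        factory.Start(stop)
        factory.WaitForCacheSync(stop)
    }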
Jan 13 20:32:21.813448 containerd[1470]: time="2025-01-13T20:32:21.813051413Z" level=info msg="StartContainer for \"4d03dfcf03e39c807e9755007b4ca68373dde241757a86e69197cc348c557eba\" returns successfully" Jan 13 20:32:21.821725 kubelet[2249]: I0113 20:32:21.821332 2249 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:21.822768 kubelet[2249]: E0113 20:32:21.822716 2249 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.206:6443/api/v1/nodes\": dial tcp 172.24.4.206:6443: connect: connection refused" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:21.826321 containerd[1470]: time="2025-01-13T20:32:21.826163118Z" level=info msg="StartContainer for \"78367edbb96a9097675b85893c134e5080252a513ac534aa33fff9f175c4d968\" returns successfully" Jan 13 20:32:21.847196 containerd[1470]: time="2025-01-13T20:32:21.847156420Z" level=info msg="StartContainer for \"73099fcd200f1f001cc29fc36eb9eeac4e4db13baf25ece28d760895f448d406\" returns successfully" Jan 13 20:32:23.424977 kubelet[2249]: I0113 20:32:23.424945 2249 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:23.955216 kubelet[2249]: E0113 20:32:23.955174 2249 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" not found" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:23.980084 kubelet[2249]: I0113 20:32:23.980044 2249 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:23.980084 kubelet[2249]: E0113 20:32:23.980086 2249 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\": node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" not found" Jan 13 20:32:24.002535 kubelet[2249]: E0113 20:32:24.002504 2249 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" not found" Jan 13 20:32:24.103276 kubelet[2249]: E0113 20:32:24.103219 2249 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" not found" Jan 13 20:32:24.203501 kubelet[2249]: E0113 20:32:24.203393 2249 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" not found" Jan 13 20:32:24.304427 kubelet[2249]: E0113 20:32:24.304345 2249 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" not found" Jan 13 20:32:24.405627 kubelet[2249]: E0113 20:32:24.405503 2249 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" not found" Jan 13 20:32:24.505856 kubelet[2249]: E0113 20:32:24.505781 2249 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" not found" Jan 13 20:32:25.176201 kubelet[2249]: I0113 20:32:25.176131 2249 apiserver.go:52] "Watching apiserver" Jan 13 20:32:25.217046 kubelet[2249]: I0113 20:32:25.216937 2249 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:32:26.983101 systemd[1]: Reloading requested from client PID 2520 ('systemctl') (unit session-9.scope)... Jan 13 20:32:26.983138 systemd[1]: Reloading... 
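The sandbox and container entries above trace the CRI call sequence the kubelet drives through containerd: RunPodSandbox (backed by the pause:3.8 image pulled earlier) returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer launches the result under a cri-containerd-<id>.scope systemd unit. A compressed Go sketch of the same three calls against the CRI gRPC API; the socket path and image reference are assumptions, while the pod name and UID are taken from the log:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's default CRI endpoint (assumed path).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal",
                Uid:       "b63eba8f048fdb2408991364e0184b74",
                Namespace: "kube-system",
            },
        }
        // 1. RunPodSandbox sets up the pause container, cgroup, and netns.
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            panic(err)
        }
        // 2. CreateContainer places the workload container inside the sandbox.
        //    The image tag below is illustrative, not taken from the log.
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.31.0"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            panic(err)
        }
        // 3. StartContainer; containerd runs it as cri-containerd-<id>.scope.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            panic(err)
        }
    }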
Jan 13 20:32:27.100603 zram_generator::config[2562]: No configuration found. Jan 13 20:32:27.245998 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:32:27.347332 systemd[1]: Reloading finished in 363 ms. Jan 13 20:32:27.368898 kubelet[2249]: W0113 20:32:27.367845 2249 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:32:27.392318 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:32:27.407586 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:32:27.407834 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:32:27.407881 systemd[1]: kubelet.service: Consumed 1.168s CPU time, 119.1M memory peak, 0B memory swap peak. Jan 13 20:32:27.413901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:32:27.613773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:32:27.624984 (kubelet)[2622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:32:27.678105 kubelet[2622]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:32:27.678105 kubelet[2622]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:32:27.678105 kubelet[2622]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:32:27.678105 kubelet[2622]: I0113 20:32:27.677876 2622 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:32:27.685211 kubelet[2622]: I0113 20:32:27.684849 2622 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:32:27.685211 kubelet[2622]: I0113 20:32:27.684882 2622 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:32:27.685418 kubelet[2622]: I0113 20:32:27.685406 2622 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:32:27.688160 kubelet[2622]: I0113 20:32:27.688145 2622 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:32:27.697966 kubelet[2622]: I0113 20:32:27.697938 2622 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:32:27.710673 kubelet[2622]: E0113 20:32:27.710593 2622 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:32:27.711011 kubelet[2622]: I0113 20:32:27.710710 2622 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
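The deprecation warnings printed by the restarted kubelet above say that --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config, while --pod-infra-container-image has no config equivalent because the image garbage collector now gets the sandbox image from CRI. A sketch of emitting the equivalent KubeletConfiguration with the published Go types; the endpoint value is an assumption, and the volume plugin dir is the path from the flexvolume probe earlier in the log:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        cfg := kubeletv1beta1.KubeletConfiguration{
            TypeMeta: metav1.TypeMeta{
                APIVersion: "kubelet.config.k8s.io/v1beta1",
                Kind:       "KubeletConfiguration",
            },
            // Replaces --container-runtime-endpoint (in-config since v1.27).
            ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
            // Replaces --volume-plugin-dir.
            VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
        }
        out, err := yaml.Marshal(&cfg)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }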
Jan 13 20:32:27.718737 kubelet[2622]: I0113 20:32:27.718668 2622 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:32:27.719125 kubelet[2622]: I0113 20:32:27.719000 2622 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:32:27.719710 kubelet[2622]: I0113 20:32:27.719229 2622 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:32:27.719710 kubelet[2622]: I0113 20:32:27.719259 2622 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-8-e51fb1a5ac.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:32:27.719710 kubelet[2622]: I0113 20:32:27.719453 2622 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:32:27.719710 kubelet[2622]: I0113 20:32:27.719464 2622 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:32:27.719938 kubelet[2622]: I0113 20:32:27.719493 2622 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:32:27.719938 kubelet[2622]: I0113 20:32:27.719617 2622 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:32:27.719938 kubelet[2622]: I0113 20:32:27.719632 2622 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:32:27.719938 kubelet[2622]: I0113 20:32:27.719658 2622 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:32:27.719938 kubelet[2622]: I0113 20:32:27.719671 2622 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:32:27.725915 kubelet[2622]: I0113 20:32:27.725884 2622 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:32:27.728580 kubelet[2622]: I0113 20:32:27.726505 2622 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:32:27.732484 kubelet[2622]: I0113 20:32:27.732469 2622 server.go:1269] "Started kubelet" Jan 
13 20:32:27.737582 kubelet[2622]: I0113 20:32:27.737520 2622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:32:27.748594 kubelet[2622]: I0113 20:32:27.746577 2622 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:32:27.748594 kubelet[2622]: I0113 20:32:27.748468 2622 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:32:27.757715 kubelet[2622]: I0113 20:32:27.757655 2622 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:32:27.757935 kubelet[2622]: I0113 20:32:27.757915 2622 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:32:27.759505 kubelet[2622]: I0113 20:32:27.759441 2622 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:32:27.764395 kubelet[2622]: I0113 20:32:27.764367 2622 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:32:27.772433 kubelet[2622]: I0113 20:32:27.772249 2622 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:32:27.773230 kubelet[2622]: I0113 20:32:27.773193 2622 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:32:27.776963 kubelet[2622]: I0113 20:32:27.776903 2622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:32:27.780342 kubelet[2622]: I0113 20:32:27.780303 2622 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:32:27.782345 kubelet[2622]: I0113 20:32:27.781430 2622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:32:27.782345 kubelet[2622]: I0113 20:32:27.781575 2622 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:32:27.782345 kubelet[2622]: I0113 20:32:27.781592 2622 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:32:27.782345 kubelet[2622]: E0113 20:32:27.781627 2622 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:32:27.786968 kubelet[2622]: I0113 20:32:27.785447 2622 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:32:27.786968 kubelet[2622]: I0113 20:32:27.785467 2622 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:32:27.788416 kubelet[2622]: E0113 20:32:27.787839 2622 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:32:27.831920 kubelet[2622]: I0113 20:32:27.831901 2622 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:32:27.832073 kubelet[2622]: I0113 20:32:27.832061 2622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:32:27.832139 kubelet[2622]: I0113 20:32:27.832131 2622 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:32:27.832355 kubelet[2622]: I0113 20:32:27.832340 2622 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:32:27.832440 kubelet[2622]: I0113 20:32:27.832416 2622 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:32:27.832498 kubelet[2622]: I0113 20:32:27.832490 2622 policy_none.go:49] "None policy: Start" Jan 13 20:32:27.833188 kubelet[2622]: I0113 20:32:27.833170 2622 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:32:27.833257 kubelet[2622]: I0113 20:32:27.833196 2622 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:32:27.833385 kubelet[2622]: I0113 20:32:27.833360 2622 state_mem.go:75] "Updated machine memory state" Jan 13 20:32:27.837493 kubelet[2622]: I0113 20:32:27.837465 2622 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:32:27.837766 kubelet[2622]: I0113 20:32:27.837636 2622 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:32:27.837766 kubelet[2622]: I0113 20:32:27.837652 2622 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:32:27.838205 kubelet[2622]: I0113 20:32:27.838119 2622 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:32:27.889825 kubelet[2622]: W0113 20:32:27.889726 2622 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:32:27.892768 kubelet[2622]: W0113 20:32:27.892730 2622 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:32:27.893461 kubelet[2622]: W0113 20:32:27.893423 2622 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 20:32:27.893526 kubelet[2622]: E0113 20:32:27.893498 2622 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:27.951043 kubelet[2622]: I0113 20:32:27.951004 2622 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:27.953064 sudo[2656]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:32:27.955340 sudo[2656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:32:27.963235 kubelet[2622]: I0113 20:32:27.963196 2622 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:27.963357 kubelet[2622]: I0113 20:32:27.963309 2622 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:27.973884 kubelet[2622]: I0113 20:32:27.973820 2622 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d696fdd9ddff55c8b53ebf2da81f318b-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"d696fdd9ddff55c8b53ebf2da81f318b\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:27.973999 kubelet[2622]: I0113 20:32:27.973886 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d696fdd9ddff55c8b53ebf2da81f318b-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"d696fdd9ddff55c8b53ebf2da81f318b\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:27.973999 kubelet[2622]: I0113 20:32:27.973945 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d696fdd9ddff55c8b53ebf2da81f318b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"d696fdd9ddff55c8b53ebf2da81f318b\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:27.973999 kubelet[2622]: I0113 20:32:27.973987 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34c39517b6e603d575e4a39423fe46e3-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"34c39517b6e603d575e4a39423fe46e3\") " pod="kube-system/kube-scheduler-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:27.974096 kubelet[2622]: I0113 20:32:27.974022 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b63eba8f048fdb2408991364e0184b74-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"b63eba8f048fdb2408991364e0184b74\") " pod="kube-system/kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:27.974127 kubelet[2622]: I0113 20:32:27.974056 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b63eba8f048fdb2408991364e0184b74-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"b63eba8f048fdb2408991364e0184b74\") " pod="kube-system/kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:27.974157 kubelet[2622]: I0113 20:32:27.974128 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b63eba8f048fdb2408991364e0184b74-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"b63eba8f048fdb2408991364e0184b74\") " pod="kube-system/kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:27.974232 kubelet[2622]: I0113 20:32:27.974165 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d696fdd9ddff55c8b53ebf2da81f318b-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"d696fdd9ddff55c8b53ebf2da81f318b\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:27.974271 
kubelet[2622]: I0113 20:32:27.974254 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d696fdd9ddff55c8b53ebf2da81f318b-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal\" (UID: \"d696fdd9ddff55c8b53ebf2da81f318b\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal" Jan 13 20:32:28.549741 sudo[2656]: pam_unix(sudo:session): session closed for user root Jan 13 20:32:28.721339 kubelet[2622]: I0113 20:32:28.721274 2622 apiserver.go:52] "Watching apiserver" Jan 13 20:32:28.772937 kubelet[2622]: I0113 20:32:28.772881 2622 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:32:28.865643 kubelet[2622]: I0113 20:32:28.864114 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-8-e51fb1a5ac.novalocal" podStartSLOduration=1.864082943 podStartE2EDuration="1.864082943s" podCreationTimestamp="2025-01-13 20:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:28.863940829 +0000 UTC m=+1.234927416" watchObservedRunningTime="2025-01-13 20:32:28.864082943 +0000 UTC m=+1.235069600" Jan 13 20:32:28.887310 kubelet[2622]: I0113 20:32:28.887217 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-8-e51fb1a5ac.novalocal" podStartSLOduration=1.887191247 podStartE2EDuration="1.887191247s" podCreationTimestamp="2025-01-13 20:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:28.884297944 +0000 UTC m=+1.255284521" watchObservedRunningTime="2025-01-13 20:32:28.887191247 +0000 UTC m=+1.258177905" Jan 13 20:32:28.902955 kubelet[2622]: I0113 20:32:28.902911 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-8-e51fb1a5ac.novalocal" podStartSLOduration=1.9028968000000002 podStartE2EDuration="1.9028968s" podCreationTimestamp="2025-01-13 20:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:28.90214621 +0000 UTC m=+1.273132787" watchObservedRunningTime="2025-01-13 20:32:28.9028968 +0000 UTC m=+1.273883397" Jan 13 20:32:30.865424 sudo[1696]: pam_unix(sudo:session): session closed for user root Jan 13 20:32:31.023764 sshd[1695]: Connection closed by 172.24.4.1 port 43510 Jan 13 20:32:31.024817 sshd-session[1693]: pam_unix(sshd:session): session closed for user core Jan 13 20:32:31.034534 systemd[1]: sshd@6-172.24.4.206:22-172.24.4.1:43510.service: Deactivated successfully. Jan 13 20:32:31.041385 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:32:31.041839 systemd[1]: session-9.scope: Consumed 7.610s CPU time, 153.7M memory peak, 0B memory swap peak. Jan 13 20:32:31.044366 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:32:31.046847 systemd-logind[1447]: Removed session 9. 
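The pod_startup_latency_tracker entries above derive podStartE2EDuration as the observed running time minus podCreationTimestamp; the zero-valued firstStartedPulling/lastFinishedPulling fields indicate no image pull was observed for these static pods, so the SLO and E2E durations coincide. A quick stdlib check of the kube-scheduler numbers:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        // Values copied from the kube-scheduler entry above.
        created, err := time.Parse(layout, "2025-01-13 20:32:27 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-01-13 20:32:28.864082943 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(running.Sub(created)) // 1.864082943s == podStartE2EDuration
    }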
Jan 13 20:32:31.241729 kubelet[2622]: I0113 20:32:31.241118 2622 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:32:31.242399 containerd[1470]: time="2025-01-13T20:32:31.241641316Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:32:31.242793 kubelet[2622]: I0113 20:32:31.242264 2622 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:32:31.916148 systemd[1]: Created slice kubepods-besteffort-poddbd8fde0_6457_498c_9e9a_b5591d2fed02.slice - libcontainer container kubepods-besteffort-poddbd8fde0_6457_498c_9e9a_b5591d2fed02.slice. Jan 13 20:32:31.978376 systemd[1]: Created slice kubepods-burstable-pod67cbd5cf_a245_4e2c_8a84_51926d16224d.slice - libcontainer container kubepods-burstable-pod67cbd5cf_a245_4e2c_8a84_51926d16224d.slice. Jan 13 20:32:31.988386 kubelet[2622]: W0113 20:32:31.988349 2622 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4186-1-0-8-e51fb1a5ac.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-8-e51fb1a5ac.novalocal' and this object Jan 13 20:32:31.988386 kubelet[2622]: E0113 20:32:31.988389 2622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4186-1-0-8-e51fb1a5ac.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-8-e51fb1a5ac.novalocal' and this object" logger="UnhandledError" Jan 13 20:32:31.995580 kubelet[2622]: W0113 20:32:31.994014 2622 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4186-1-0-8-e51fb1a5ac.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-8-e51fb1a5ac.novalocal' and this object Jan 13 20:32:31.995816 kubelet[2622]: E0113 20:32:31.995757 2622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4186-1-0-8-e51fb1a5ac.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-8-e51fb1a5ac.novalocal' and this object" logger="UnhandledError" Jan 13 20:32:31.995816 kubelet[2622]: W0113 20:32:31.994226 2622 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4186-1-0-8-e51fb1a5ac.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-8-e51fb1a5ac.novalocal' and this object Jan 13 20:32:31.995930 kubelet[2622]: E0113 20:32:31.995843 2622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4186-1-0-8-e51fb1a5ac.novalocal\" cannot list resource \"configmaps\" in API group \"\" in 
the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-8-e51fb1a5ac.novalocal' and this object" logger="UnhandledError" Jan 13 20:32:31.999137 kubelet[2622]: I0113 20:32:31.999111 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-cni-path\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:31.999211 kubelet[2622]: I0113 20:32:31.999142 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-lib-modules\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:31.999211 kubelet[2622]: I0113 20:32:31.999162 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhdhh\" (UniqueName: \"kubernetes.io/projected/67cbd5cf-a245-4e2c-8a84-51926d16224d-kube-api-access-jhdhh\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:31.999211 kubelet[2622]: I0113 20:32:31.999183 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm6pd\" (UniqueName: \"kubernetes.io/projected/dbd8fde0-6457-498c-9e9a-b5591d2fed02-kube-api-access-pm6pd\") pod \"kube-proxy-x4ldr\" (UID: \"dbd8fde0-6457-498c-9e9a-b5591d2fed02\") " pod="kube-system/kube-proxy-x4ldr" Jan 13 20:32:31.999211 kubelet[2622]: I0113 20:32:31.999203 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-cilium-run\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:31.999333 kubelet[2622]: I0113 20:32:31.999220 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-host-proc-sys-kernel\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:31.999333 kubelet[2622]: I0113 20:32:31.999237 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-bpf-maps\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:31.999333 kubelet[2622]: I0113 20:32:31.999255 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbd8fde0-6457-498c-9e9a-b5591d2fed02-xtables-lock\") pod \"kube-proxy-x4ldr\" (UID: \"dbd8fde0-6457-498c-9e9a-b5591d2fed02\") " pod="kube-system/kube-proxy-x4ldr" Jan 13 20:32:31.999333 kubelet[2622]: I0113 20:32:31.999273 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-hostproc\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:31.999333 
kubelet[2622]: I0113 20:32:31.999289 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/67cbd5cf-a245-4e2c-8a84-51926d16224d-clustermesh-secrets\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:31.999333 kubelet[2622]: I0113 20:32:31.999306 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-xtables-lock\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:31.999515 kubelet[2622]: I0113 20:32:31.999323 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-host-proc-sys-net\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:31.999515 kubelet[2622]: I0113 20:32:31.999342 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbd8fde0-6457-498c-9e9a-b5591d2fed02-lib-modules\") pod \"kube-proxy-x4ldr\" (UID: \"dbd8fde0-6457-498c-9e9a-b5591d2fed02\") " pod="kube-system/kube-proxy-x4ldr" Jan 13 20:32:31.999515 kubelet[2622]: I0113 20:32:31.999373 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/67cbd5cf-a245-4e2c-8a84-51926d16224d-hubble-tls\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:31.999515 kubelet[2622]: I0113 20:32:31.999406 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dbd8fde0-6457-498c-9e9a-b5591d2fed02-kube-proxy\") pod \"kube-proxy-x4ldr\" (UID: \"dbd8fde0-6457-498c-9e9a-b5591d2fed02\") " pod="kube-system/kube-proxy-x4ldr" Jan 13 20:32:31.999515 kubelet[2622]: I0113 20:32:31.999423 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-etc-cni-netd\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:31.999515 kubelet[2622]: I0113 20:32:31.999442 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-cilium-cgroup\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:31.999711 kubelet[2622]: I0113 20:32:31.999463 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67cbd5cf-a245-4e2c-8a84-51926d16224d-cilium-config-path\") pod \"cilium-7twbk\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") " pod="kube-system/cilium-7twbk" Jan 13 20:32:32.225452 containerd[1470]: time="2025-01-13T20:32:32.225019014Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-x4ldr,Uid:dbd8fde0-6457-498c-9e9a-b5591d2fed02,Namespace:kube-system,Attempt:0,}" Jan 13 20:32:32.263423 containerd[1470]: time="2025-01-13T20:32:32.263251913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:32:32.264265 containerd[1470]: time="2025-01-13T20:32:32.263431889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:32:32.264265 containerd[1470]: time="2025-01-13T20:32:32.263498168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:32.264265 containerd[1470]: time="2025-01-13T20:32:32.264004853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:32.319577 systemd[1]: Started cri-containerd-84cbb66e16fe70e26795c449f7995dd1cbba70c9dbf9a65247112c17124b263d.scope - libcontainer container 84cbb66e16fe70e26795c449f7995dd1cbba70c9dbf9a65247112c17124b263d. Jan 13 20:32:32.362232 systemd[1]: Created slice kubepods-besteffort-podcc867dce_fdb3_46b8_a8c8_7ee7973687bf.slice - libcontainer container kubepods-besteffort-podcc867dce_fdb3_46b8_a8c8_7ee7973687bf.slice. Jan 13 20:32:32.375382 containerd[1470]: time="2025-01-13T20:32:32.375190937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x4ldr,Uid:dbd8fde0-6457-498c-9e9a-b5591d2fed02,Namespace:kube-system,Attempt:0,} returns sandbox id \"84cbb66e16fe70e26795c449f7995dd1cbba70c9dbf9a65247112c17124b263d\"" Jan 13 20:32:32.378503 containerd[1470]: time="2025-01-13T20:32:32.378365269Z" level=info msg="CreateContainer within sandbox \"84cbb66e16fe70e26795c449f7995dd1cbba70c9dbf9a65247112c17124b263d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:32:32.400589 containerd[1470]: time="2025-01-13T20:32:32.400510201Z" level=info msg="CreateContainer within sandbox \"84cbb66e16fe70e26795c449f7995dd1cbba70c9dbf9a65247112c17124b263d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb3b29c1ad2f35385fbd979c7145ac1f2a873de08a1653ff10cbba0b7111c03c\"" Jan 13 20:32:32.401392 containerd[1470]: time="2025-01-13T20:32:32.401136493Z" level=info msg="StartContainer for \"fb3b29c1ad2f35385fbd979c7145ac1f2a873de08a1653ff10cbba0b7111c03c\"" Jan 13 20:32:32.403954 kubelet[2622]: I0113 20:32:32.403872 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc867dce-fdb3-46b8-a8c8-7ee7973687bf-cilium-config-path\") pod \"cilium-operator-5d85765b45-xfmzd\" (UID: \"cc867dce-fdb3-46b8-a8c8-7ee7973687bf\") " pod="kube-system/cilium-operator-5d85765b45-xfmzd" Jan 13 20:32:32.405068 kubelet[2622]: I0113 20:32:32.404799 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl5hs\" (UniqueName: \"kubernetes.io/projected/cc867dce-fdb3-46b8-a8c8-7ee7973687bf-kube-api-access-wl5hs\") pod \"cilium-operator-5d85765b45-xfmzd\" (UID: \"cc867dce-fdb3-46b8-a8c8-7ee7973687bf\") " pod="kube-system/cilium-operator-5d85765b45-xfmzd" Jan 13 20:32:32.432716 systemd[1]: Started cri-containerd-fb3b29c1ad2f35385fbd979c7145ac1f2a873de08a1653ff10cbba0b7111c03c.scope - libcontainer container fb3b29c1ad2f35385fbd979c7145ac1f2a873de08a1653ff10cbba0b7111c03c. 
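[Annotation] The VerifyControllerAttachedVolume entries above are the kubelet's reconciler wiring up every volume the pod specs declare — hostPath mounts such as cni-path and lib-modules, plus projected kube-api-access-* service-account tokens — before each sandbox is allowed to start. As a rough sketch, two of the hostPath volumes named in the log would be declared like this with the official kubernetes Python client; the host paths and mount points are illustrative assumptions, not values taken from this log:

from kubernetes import client

# Hypothetical declarations mirroring the volume names in the kubelet entries
# above; the path/mount_path values are assumptions for illustration only.
volumes = [
    client.V1Volume(name="cni-path",
                    host_path=client.V1HostPathVolumeSource(path="/opt/cni/bin")),
    client.V1Volume(name="lib-modules",
                    host_path=client.V1HostPathVolumeSource(path="/lib/modules")),
]
mounts = [
    client.V1VolumeMount(name="cni-path", mount_path="/host/opt/cni/bin"),
    client.V1VolumeMount(name="lib-modules", mount_path="/lib/modules", read_only=True),
]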
Jan 13 20:32:32.468437 containerd[1470]: time="2025-01-13T20:32:32.468395177Z" level=info msg="StartContainer for \"fb3b29c1ad2f35385fbd979c7145ac1f2a873de08a1653ff10cbba0b7111c03c\" returns successfully" Jan 13 20:32:32.969224 containerd[1470]: time="2025-01-13T20:32:32.969113618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xfmzd,Uid:cc867dce-fdb3-46b8-a8c8-7ee7973687bf,Namespace:kube-system,Attempt:0,}" Jan 13 20:32:33.027854 containerd[1470]: time="2025-01-13T20:32:33.027211110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:32:33.028116 containerd[1470]: time="2025-01-13T20:32:33.027532401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:32:33.028196 containerd[1470]: time="2025-01-13T20:32:33.028098345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:33.028736 containerd[1470]: time="2025-01-13T20:32:33.028604190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:33.061973 systemd[1]: Started cri-containerd-8dea176cb9715f02bdee61e75d262fb972f05d7beef5b50620fc9e36d9b06a10.scope - libcontainer container 8dea176cb9715f02bdee61e75d262fb972f05d7beef5b50620fc9e36d9b06a10. Jan 13 20:32:33.100926 kubelet[2622]: E0113 20:32:33.100629 2622 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 13 20:32:33.100926 kubelet[2622]: E0113 20:32:33.100652 2622 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-7twbk: failed to sync secret cache: timed out waiting for the condition Jan 13 20:32:33.100926 kubelet[2622]: E0113 20:32:33.100719 2622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/67cbd5cf-a245-4e2c-8a84-51926d16224d-hubble-tls podName:67cbd5cf-a245-4e2c-8a84-51926d16224d nodeName:}" failed. No retries permitted until 2025-01-13 20:32:33.60069194 +0000 UTC m=+5.971678517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/67cbd5cf-a245-4e2c-8a84-51926d16224d-hubble-tls") pod "cilium-7twbk" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d") : failed to sync secret cache: timed out waiting for the condition Jan 13 20:32:33.108628 containerd[1470]: time="2025-01-13T20:32:33.108546044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xfmzd,Uid:cc867dce-fdb3-46b8-a8c8-7ee7973687bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dea176cb9715f02bdee61e75d262fb972f05d7beef5b50620fc9e36d9b06a10\"" Jan 13 20:32:33.112632 containerd[1470]: time="2025-01-13T20:32:33.111082515Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:32:33.783293 containerd[1470]: time="2025-01-13T20:32:33.782655570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7twbk,Uid:67cbd5cf-a245-4e2c-8a84-51926d16224d,Namespace:kube-system,Attempt:0,}" Jan 13 20:32:33.844986 containerd[1470]: time="2025-01-13T20:32:33.844045607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:32:33.844986 containerd[1470]: time="2025-01-13T20:32:33.844155927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:32:33.844986 containerd[1470]: time="2025-01-13T20:32:33.844203273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:33.845503 containerd[1470]: time="2025-01-13T20:32:33.844437217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:32:33.883738 systemd[1]: Started cri-containerd-bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af.scope - libcontainer container bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af. Jan 13 20:32:33.913215 containerd[1470]: time="2025-01-13T20:32:33.913070292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7twbk,Uid:67cbd5cf-a245-4e2c-8a84-51926d16224d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\"" Jan 13 20:32:34.118442 kubelet[2622]: I0113 20:32:34.117351 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x4ldr" podStartSLOduration=3.11731795 podStartE2EDuration="3.11731795s" podCreationTimestamp="2025-01-13 20:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:32.848482691 +0000 UTC m=+5.219469278" watchObservedRunningTime="2025-01-13 20:32:34.11731795 +0000 UTC m=+6.488304587" Jan 13 20:32:38.284477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount789889679.mount: Deactivated successfully. 
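[Annotation] The pod_startup_latency_tracker entry above can be checked by hand. Taking its fields at face value, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window from that (zero for kube-proxy-x4ldr, whose firstStartedPulling/lastFinishedPulling are the zero time). A minimal check:

from datetime import datetime

def ts(s):
    # Parse "2025-01-13 20:32:34.11731795", truncating to datetime's µs precision.
    return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f")

created = ts("2025-01-13 20:32:31.000000")
running = ts("2025-01-13 20:32:34.11731795")  # watchObservedRunningTime above
print((running - created).total_seconds())    # -> 3.117317 ~= "3.11731795s"

The same arithmetic holds later for cilium-operator-5d85765b45-xfmzd: 7.874737972 s end-to-end minus the 5.74679573 s pull window (20:32:38.856412853 − 20:32:33.109617123) gives the reported podStartSLOduration of ~2.127942 s.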
Jan 13 20:32:38.851102 containerd[1470]: time="2025-01-13T20:32:38.851007222Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:38.852208 containerd[1470]: time="2025-01-13T20:32:38.852152702Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906577" Jan 13 20:32:38.853280 containerd[1470]: time="2025-01-13T20:32:38.853203455Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:38.855294 containerd[1470]: time="2025-01-13T20:32:38.855263085Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.744127444s" Jan 13 20:32:38.855589 containerd[1470]: time="2025-01-13T20:32:38.855391945Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 20:32:38.857166 containerd[1470]: time="2025-01-13T20:32:38.856952756Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:32:38.858339 containerd[1470]: time="2025-01-13T20:32:38.858163647Z" level=info msg="CreateContainer within sandbox \"8dea176cb9715f02bdee61e75d262fb972f05d7beef5b50620fc9e36d9b06a10\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:32:38.883168 containerd[1470]: time="2025-01-13T20:32:38.883132170Z" level=info msg="CreateContainer within sandbox \"8dea176cb9715f02bdee61e75d262fb972f05d7beef5b50620fc9e36d9b06a10\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\"" Jan 13 20:32:38.884831 containerd[1470]: time="2025-01-13T20:32:38.883844434Z" level=info msg="StartContainer for \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\"" Jan 13 20:32:38.918689 systemd[1]: Started cri-containerd-39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64.scope - libcontainer container 39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64. 
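[Annotation] The pull above resolves a reference carrying both a tag and a digest (quay.io/cilium/operator-generic:v1.12.5@sha256:b296…); when a digest is present the tag is informational only, which is consistent with containerd reporting an empty repo tag and a populated repo digest. A small sketch of splitting such a reference (digest shortened here for readability):

def parse_ref(ref):
    # Split "repo[:tag][@digest]" into its parts. When "@sha256:..." is
    # present, registries resolve by digest and the tag is ignored.
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    repo, _, tag = ref.rpartition(":")
    if not repo or "/" in tag:  # the last ":" belonged to a registry port
        repo, tag = ref, None
    return repo, tag, digest

print(parse_ref("quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f..."))
# -> ('quay.io/cilium/operator-generic', 'v1.12.5', 'sha256:b296eb7f...')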
Jan 13 20:32:38.979815 containerd[1470]: time="2025-01-13T20:32:38.979767820Z" level=info msg="StartContainer for \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\" returns successfully" Jan 13 20:32:39.874996 kubelet[2622]: I0113 20:32:39.874755 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-xfmzd" podStartSLOduration=2.127942232 podStartE2EDuration="7.874737972s" podCreationTimestamp="2025-01-13 20:32:32 +0000 UTC" firstStartedPulling="2025-01-13 20:32:33.109617123 +0000 UTC m=+5.480603700" lastFinishedPulling="2025-01-13 20:32:38.856412853 +0000 UTC m=+11.227399440" observedRunningTime="2025-01-13 20:32:39.873896768 +0000 UTC m=+12.244883345" watchObservedRunningTime="2025-01-13 20:32:39.874737972 +0000 UTC m=+12.245724560" Jan 13 20:32:43.952099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776918425.mount: Deactivated successfully. Jan 13 20:32:52.196135 containerd[1470]: time="2025-01-13T20:32:52.195735779Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:52.197878 containerd[1470]: time="2025-01-13T20:32:52.197776162Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734111" Jan 13 20:32:52.200030 containerd[1470]: time="2025-01-13T20:32:52.199841924Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:32:52.201957 containerd[1470]: time="2025-01-13T20:32:52.201373479Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.344389955s" Jan 13 20:32:52.201957 containerd[1470]: time="2025-01-13T20:32:52.201414987Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:32:52.209928 containerd[1470]: time="2025-01-13T20:32:52.209849973Z" level=info msg="CreateContainer within sandbox \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:32:52.239895 containerd[1470]: time="2025-01-13T20:32:52.239787239Z" level=info msg="CreateContainer within sandbox \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562\"" Jan 13 20:32:52.241208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555072007.mount: Deactivated successfully. 
Jan 13 20:32:52.242048 containerd[1470]: time="2025-01-13T20:32:52.241282005Z" level=info msg="StartContainer for \"491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562\"" Jan 13 20:32:52.287800 systemd[1]: Started cri-containerd-491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562.scope - libcontainer container 491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562. Jan 13 20:32:52.320655 containerd[1470]: time="2025-01-13T20:32:52.320511684Z" level=info msg="StartContainer for \"491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562\" returns successfully" Jan 13 20:32:52.327036 systemd[1]: cri-containerd-491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562.scope: Deactivated successfully. Jan 13 20:32:53.229029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562-rootfs.mount: Deactivated successfully. Jan 13 20:32:53.363629 containerd[1470]: time="2025-01-13T20:32:53.363401243Z" level=info msg="shim disconnected" id=491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562 namespace=k8s.io Jan 13 20:32:53.363629 containerd[1470]: time="2025-01-13T20:32:53.363520074Z" level=warning msg="cleaning up after shim disconnected" id=491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562 namespace=k8s.io Jan 13 20:32:53.363629 containerd[1470]: time="2025-01-13T20:32:53.363553426Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:32:53.913239 containerd[1470]: time="2025-01-13T20:32:53.912432842Z" level=info msg="CreateContainer within sandbox \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:32:53.952554 containerd[1470]: time="2025-01-13T20:32:53.952478346Z" level=info msg="CreateContainer within sandbox \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335\"" Jan 13 20:32:53.955728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount950584154.mount: Deactivated successfully. Jan 13 20:32:53.958490 containerd[1470]: time="2025-01-13T20:32:53.957863007Z" level=info msg="StartContainer for \"f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335\"" Jan 13 20:32:54.012742 systemd[1]: Started cri-containerd-f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335.scope - libcontainer container f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335. Jan 13 20:32:54.045053 containerd[1470]: time="2025-01-13T20:32:54.044984215Z" level=info msg="StartContainer for \"f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335\" returns successfully" Jan 13 20:32:54.052975 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:32:54.053240 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:32:54.053301 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:32:54.060268 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:32:54.062310 systemd[1]: cri-containerd-f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335.scope: Deactivated successfully. Jan 13 20:32:54.074731 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
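[Annotation] The systemd-sysctl stop/start/finish cycle around the apply-sysctl-overwrites container is systemd reapplying kernel variables after the init step rewrites them through /proc/sys. A minimal sketch of that mechanism; the rp_filter key is an illustrative assumption, not a value read from this log:

from pathlib import Path

def sysctl(key, value=None):
    # Read (and optionally write) a sysctl via /proc/sys; writing needs root.
    node = Path("/proc/sys") / key.replace(".", "/")
    if value is not None:
        node.write_text(str(value) + "\n")
    return node.read_text().strip()

# e.g. sysctl("net.ipv4.conf.all.rp_filter", 0)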
Jan 13 20:32:54.093729 containerd[1470]: time="2025-01-13T20:32:54.093655888Z" level=info msg="shim disconnected" id=f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335 namespace=k8s.io Jan 13 20:32:54.093729 containerd[1470]: time="2025-01-13T20:32:54.093704129Z" level=warning msg="cleaning up after shim disconnected" id=f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335 namespace=k8s.io Jan 13 20:32:54.093729 containerd[1470]: time="2025-01-13T20:32:54.093718565Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:32:54.229895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335-rootfs.mount: Deactivated successfully. Jan 13 20:32:54.924195 containerd[1470]: time="2025-01-13T20:32:54.921476052Z" level=info msg="CreateContainer within sandbox \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:32:54.973617 containerd[1470]: time="2025-01-13T20:32:54.973193345Z" level=info msg="CreateContainer within sandbox \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075\"" Jan 13 20:32:54.978973 containerd[1470]: time="2025-01-13T20:32:54.978852989Z" level=info msg="StartContainer for \"63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075\"" Jan 13 20:32:55.023759 systemd[1]: Started cri-containerd-63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075.scope - libcontainer container 63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075. Jan 13 20:32:55.053161 systemd[1]: cri-containerd-63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075.scope: Deactivated successfully. Jan 13 20:32:55.059791 containerd[1470]: time="2025-01-13T20:32:55.059695457Z" level=info msg="StartContainer for \"63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075\" returns successfully" Jan 13 20:32:55.090889 containerd[1470]: time="2025-01-13T20:32:55.090813861Z" level=info msg="shim disconnected" id=63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075 namespace=k8s.io Jan 13 20:32:55.090889 containerd[1470]: time="2025-01-13T20:32:55.090877319Z" level=warning msg="cleaning up after shim disconnected" id=63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075 namespace=k8s.io Jan 13 20:32:55.090889 containerd[1470]: time="2025-01-13T20:32:55.090887659Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:32:55.229427 systemd[1]: run-containerd-runc-k8s.io-63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075-runc.X7badb.mount: Deactivated successfully. Jan 13 20:32:55.230095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075-rootfs.mount: Deactivated successfully. 
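[Annotation] The mount-bpf-fs step above ensures a BPF filesystem is mounted so pinned maps outlive agent restarts; /sys/fs/bpf is the conventional mountpoint, assumed here rather than stated by the log. A quick check from the node:

def bpffs_mounted(mountpoint="/sys/fs/bpf"):
    # True if a filesystem of type "bpf" is mounted at the given path.
    with open("/proc/mounts") as f:
        return any(parts[1] == mountpoint and parts[2] == "bpf"
                   for parts in (line.split() for line in f))

print(bpffs_mounted())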
Jan 13 20:32:55.929423 containerd[1470]: time="2025-01-13T20:32:55.929267966Z" level=info msg="CreateContainer within sandbox \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:32:55.974545 containerd[1470]: time="2025-01-13T20:32:55.972091082Z" level=info msg="CreateContainer within sandbox \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b\"" Jan 13 20:32:55.978667 containerd[1470]: time="2025-01-13T20:32:55.976103046Z" level=info msg="StartContainer for \"fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b\"" Jan 13 20:32:56.030746 systemd[1]: Started cri-containerd-fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b.scope - libcontainer container fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b. Jan 13 20:32:56.054432 systemd[1]: cri-containerd-fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b.scope: Deactivated successfully. Jan 13 20:32:56.058423 containerd[1470]: time="2025-01-13T20:32:56.058394686Z" level=info msg="StartContainer for \"fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b\" returns successfully" Jan 13 20:32:56.084806 containerd[1470]: time="2025-01-13T20:32:56.084753249Z" level=info msg="shim disconnected" id=fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b namespace=k8s.io Jan 13 20:32:56.085037 containerd[1470]: time="2025-01-13T20:32:56.085020088Z" level=warning msg="cleaning up after shim disconnected" id=fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b namespace=k8s.io Jan 13 20:32:56.085156 containerd[1470]: time="2025-01-13T20:32:56.085090539Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:32:56.231141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b-rootfs.mount: Deactivated successfully. Jan 13 20:32:56.942013 containerd[1470]: time="2025-01-13T20:32:56.941934811Z" level=info msg="CreateContainer within sandbox \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:32:57.019608 containerd[1470]: time="2025-01-13T20:32:57.014630331Z" level=info msg="CreateContainer within sandbox \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\"" Jan 13 20:32:57.019608 containerd[1470]: time="2025-01-13T20:32:57.018656583Z" level=info msg="StartContainer for \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\"" Jan 13 20:32:57.017751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1832157324.mount: Deactivated successfully. Jan 13 20:32:57.059812 systemd[1]: Started cri-containerd-a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b.scope - libcontainer container a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b. 
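[Annotation] Across these entries, five containers run in one sandbox in strict sequence: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each exit (scope deactivated, shim disconnected) before the next is created, and only then does the long-running cilium-agent start. That is initContainers semantics; a sketch of how such a chain is declared — only the container names come from the log, the image and commands are placeholders:

from kubernetes import client

init_containers = [
    client.V1Container(name=name,
                       image="quay.io/cilium/cilium:v1.12.5",  # placeholder
                       command=["/bin/sh", "-c", f"echo {name}"])
    for name in ("mount-cgroup", "apply-sysctl-overwrites",
                 "mount-bpf-fs", "clean-cilium-state")
]
pod_spec = client.V1PodSpec(
    init_containers=init_containers,
    containers=[client.V1Container(name="cilium-agent",
                                   image="quay.io/cilium/cilium:v1.12.5")],
)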
Jan 13 20:32:57.097104 containerd[1470]: time="2025-01-13T20:32:57.096991118Z" level=info msg="StartContainer for \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\" returns successfully" Jan 13 20:32:57.244849 kubelet[2622]: I0113 20:32:57.243842 2622 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:32:57.299421 systemd[1]: Created slice kubepods-burstable-pod3b4c5301_619b_4fa1_82a3_990676633834.slice - libcontainer container kubepods-burstable-pod3b4c5301_619b_4fa1_82a3_990676633834.slice. Jan 13 20:32:57.308586 systemd[1]: Created slice kubepods-burstable-pod8bb89f21_ce55_4923_9e56_f940c186059f.slice - libcontainer container kubepods-burstable-pod8bb89f21_ce55_4923_9e56_f940c186059f.slice. Jan 13 20:32:57.384442 kubelet[2622]: I0113 20:32:57.384241 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfrd\" (UniqueName: \"kubernetes.io/projected/8bb89f21-ce55-4923-9e56-f940c186059f-kube-api-access-5rfrd\") pod \"coredns-6f6b679f8f-g4w2v\" (UID: \"8bb89f21-ce55-4923-9e56-f940c186059f\") " pod="kube-system/coredns-6f6b679f8f-g4w2v" Jan 13 20:32:57.384442 kubelet[2622]: I0113 20:32:57.384299 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gv8v\" (UniqueName: \"kubernetes.io/projected/3b4c5301-619b-4fa1-82a3-990676633834-kube-api-access-7gv8v\") pod \"coredns-6f6b679f8f-bbndn\" (UID: \"3b4c5301-619b-4fa1-82a3-990676633834\") " pod="kube-system/coredns-6f6b679f8f-bbndn" Jan 13 20:32:57.384442 kubelet[2622]: I0113 20:32:57.384324 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4c5301-619b-4fa1-82a3-990676633834-config-volume\") pod \"coredns-6f6b679f8f-bbndn\" (UID: \"3b4c5301-619b-4fa1-82a3-990676633834\") " pod="kube-system/coredns-6f6b679f8f-bbndn" Jan 13 20:32:57.384442 kubelet[2622]: I0113 20:32:57.384346 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bb89f21-ce55-4923-9e56-f940c186059f-config-volume\") pod \"coredns-6f6b679f8f-g4w2v\" (UID: \"8bb89f21-ce55-4923-9e56-f940c186059f\") " pod="kube-system/coredns-6f6b679f8f-g4w2v" Jan 13 20:32:57.605065 containerd[1470]: time="2025-01-13T20:32:57.605010415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbndn,Uid:3b4c5301-619b-4fa1-82a3-990676633834,Namespace:kube-system,Attempt:0,}" Jan 13 20:32:57.613062 containerd[1470]: time="2025-01-13T20:32:57.613023075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-g4w2v,Uid:8bb89f21-ce55-4923-9e56-f940c186059f,Namespace:kube-system,Attempt:0,}" Jan 13 20:32:58.000648 kubelet[2622]: I0113 20:32:57.998834 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7twbk" podStartSLOduration=8.707854875 podStartE2EDuration="26.998544397s" podCreationTimestamp="2025-01-13 20:32:31 +0000 UTC" firstStartedPulling="2025-01-13 20:32:33.914467681 +0000 UTC m=+6.285454289" lastFinishedPulling="2025-01-13 20:32:52.205157234 +0000 UTC m=+24.576143811" observedRunningTime="2025-01-13 20:32:57.997822059 +0000 UTC m=+30.368808706" watchObservedRunningTime="2025-01-13 20:32:57.998544397 +0000 UTC m=+30.369531024" Jan 13 20:32:59.282728 systemd-networkd[1374]: cilium_host: Link UP Jan 13 20:32:59.283155 
systemd-networkd[1374]: cilium_net: Link UP Jan 13 20:32:59.285700 systemd-networkd[1374]: cilium_net: Gained carrier Jan 13 20:32:59.286157 systemd-networkd[1374]: cilium_host: Gained carrier Jan 13 20:32:59.286467 systemd-networkd[1374]: cilium_net: Gained IPv6LL Jan 13 20:32:59.287957 systemd-networkd[1374]: cilium_host: Gained IPv6LL Jan 13 20:32:59.396734 systemd-networkd[1374]: cilium_vxlan: Link UP Jan 13 20:32:59.396743 systemd-networkd[1374]: cilium_vxlan: Gained carrier Jan 13 20:32:59.702602 kernel: NET: Registered PF_ALG protocol family Jan 13 20:33:00.457796 systemd-networkd[1374]: lxc_health: Link UP Jan 13 20:33:00.465017 systemd-networkd[1374]: lxc_health: Gained carrier Jan 13 20:33:00.678208 systemd-networkd[1374]: lxc97502d51be4c: Link UP Jan 13 20:33:00.684672 kernel: eth0: renamed from tmp51d8a Jan 13 20:33:00.688921 systemd-networkd[1374]: lxc97502d51be4c: Gained carrier Jan 13 20:33:00.710368 systemd-networkd[1374]: lxc7b08b9cd49f7: Link UP Jan 13 20:33:00.717613 kernel: eth0: renamed from tmpa69ee Jan 13 20:33:00.724288 systemd-networkd[1374]: lxc7b08b9cd49f7: Gained carrier Jan 13 20:33:00.804697 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL Jan 13 20:33:01.764842 systemd-networkd[1374]: lxc7b08b9cd49f7: Gained IPv6LL Jan 13 20:33:02.084792 systemd-networkd[1374]: lxc_health: Gained IPv6LL Jan 13 20:33:02.212781 systemd-networkd[1374]: lxc97502d51be4c: Gained IPv6LL Jan 13 20:33:05.513621 containerd[1470]: time="2025-01-13T20:33:05.513453150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:33:05.515967 containerd[1470]: time="2025-01-13T20:33:05.514308659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:33:05.515967 containerd[1470]: time="2025-01-13T20:33:05.514361877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:33:05.515967 containerd[1470]: time="2025-01-13T20:33:05.515574312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:33:05.552781 systemd[1]: Started cri-containerd-51d8abee36682f97fb955faa094c77675d9fa7f2bf75d9eb9db85a67b6461c7f.scope - libcontainer container 51d8abee36682f97fb955faa094c77675d9fa7f2bf75d9eb9db85a67b6461c7f. Jan 13 20:33:05.592129 containerd[1470]: time="2025-01-13T20:33:05.592011904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:33:05.592129 containerd[1470]: time="2025-01-13T20:33:05.592067097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:33:05.592129 containerd[1470]: time="2025-01-13T20:33:05.592080873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:33:05.593933 containerd[1470]: time="2025-01-13T20:33:05.593688875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:33:05.623015 systemd[1]: Started cri-containerd-a69ee693d1ab4bfdef9793030347fb186a8906a32e306e6007f28b78945759e8.scope - libcontainer container a69ee693d1ab4bfdef9793030347fb186a8906a32e306e6007f28b78945759e8. 
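[Annotation] The systemd-networkd burst above is Cilium building its datapath: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, lxc_health, and one lxc* device per endpoint (the kernel's "eth0: renamed from tmp…" lines are the container ends of those veths). A sketch for listing them from the node, assuming the pyroute2 package is available there:

from pyroute2 import IPRoute

with IPRoute() as ipr:
    for link in ipr.get_links():
        name = link.get_attr("IFLA_IFNAME")
        if name and name.startswith(("cilium_", "lxc")):
            print(name, link.get_attr("IFLA_OPERSTATE"))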
Jan 13 20:33:05.646062 containerd[1470]: time="2025-01-13T20:33:05.646025107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bbndn,Uid:3b4c5301-619b-4fa1-82a3-990676633834,Namespace:kube-system,Attempt:0,} returns sandbox id \"51d8abee36682f97fb955faa094c77675d9fa7f2bf75d9eb9db85a67b6461c7f\"" Jan 13 20:33:05.656220 containerd[1470]: time="2025-01-13T20:33:05.655790724Z" level=info msg="CreateContainer within sandbox \"51d8abee36682f97fb955faa094c77675d9fa7f2bf75d9eb9db85a67b6461c7f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:33:05.685696 containerd[1470]: time="2025-01-13T20:33:05.685536036Z" level=info msg="CreateContainer within sandbox \"51d8abee36682f97fb955faa094c77675d9fa7f2bf75d9eb9db85a67b6461c7f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76d96bdc54536bd0fd1547b0bcea9f5866eb42ed8a7a2f6dfff510b4a20f10fd\"" Jan 13 20:33:05.688594 containerd[1470]: time="2025-01-13T20:33:05.688342578Z" level=info msg="StartContainer for \"76d96bdc54536bd0fd1547b0bcea9f5866eb42ed8a7a2f6dfff510b4a20f10fd\"" Jan 13 20:33:05.708827 containerd[1470]: time="2025-01-13T20:33:05.708761361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-g4w2v,Uid:8bb89f21-ce55-4923-9e56-f940c186059f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a69ee693d1ab4bfdef9793030347fb186a8906a32e306e6007f28b78945759e8\"" Jan 13 20:33:05.714582 containerd[1470]: time="2025-01-13T20:33:05.714355749Z" level=info msg="CreateContainer within sandbox \"a69ee693d1ab4bfdef9793030347fb186a8906a32e306e6007f28b78945759e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:33:05.738790 systemd[1]: Started cri-containerd-76d96bdc54536bd0fd1547b0bcea9f5866eb42ed8a7a2f6dfff510b4a20f10fd.scope - libcontainer container 76d96bdc54536bd0fd1547b0bcea9f5866eb42ed8a7a2f6dfff510b4a20f10fd. Jan 13 20:33:05.741733 containerd[1470]: time="2025-01-13T20:33:05.741602403Z" level=info msg="CreateContainer within sandbox \"a69ee693d1ab4bfdef9793030347fb186a8906a32e306e6007f28b78945759e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8648f4c46c9d02efa37153e9b722c2935e66d61277a13f717829f145c3784546\"" Jan 13 20:33:05.743835 containerd[1470]: time="2025-01-13T20:33:05.743668532Z" level=info msg="StartContainer for \"8648f4c46c9d02efa37153e9b722c2935e66d61277a13f717829f145c3784546\"" Jan 13 20:33:05.794880 systemd[1]: Started cri-containerd-8648f4c46c9d02efa37153e9b722c2935e66d61277a13f717829f145c3784546.scope - libcontainer container 8648f4c46c9d02efa37153e9b722c2935e66d61277a13f717829f145c3784546. 
Jan 13 20:33:05.808279 containerd[1470]: time="2025-01-13T20:33:05.808155736Z" level=info msg="StartContainer for \"76d96bdc54536bd0fd1547b0bcea9f5866eb42ed8a7a2f6dfff510b4a20f10fd\" returns successfully" Jan 13 20:33:05.834653 containerd[1470]: time="2025-01-13T20:33:05.834588772Z" level=info msg="StartContainer for \"8648f4c46c9d02efa37153e9b722c2935e66d61277a13f717829f145c3784546\" returns successfully" Jan 13 20:33:06.015659 kubelet[2622]: I0113 20:33:06.014055 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-g4w2v" podStartSLOduration=34.013999384 podStartE2EDuration="34.013999384s" podCreationTimestamp="2025-01-13 20:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:33:06.007453939 +0000 UTC m=+38.378440566" watchObservedRunningTime="2025-01-13 20:33:06.013999384 +0000 UTC m=+38.384986011" Jan 13 20:33:06.048861 kubelet[2622]: I0113 20:33:06.045593 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bbndn" podStartSLOduration=34.045534183 podStartE2EDuration="34.045534183s" podCreationTimestamp="2025-01-13 20:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:33:06.042367157 +0000 UTC m=+38.413353784" watchObservedRunningTime="2025-01-13 20:33:06.045534183 +0000 UTC m=+38.416520810" Jan 13 20:33:46.820161 systemd[1]: Started sshd@7-172.24.4.206:22-172.24.4.1:53700.service - OpenSSH per-connection server daemon (172.24.4.1:53700). Jan 13 20:33:48.128754 sshd[3992]: Accepted publickey for core from 172.24.4.1 port 53700 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:33:48.135938 sshd-session[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:33:48.159465 systemd-logind[1447]: New session 10 of user core. Jan 13 20:33:48.165988 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:33:48.909758 sshd[3994]: Connection closed by 172.24.4.1 port 53700 Jan 13 20:33:48.911114 sshd-session[3992]: pam_unix(sshd:session): session closed for user core Jan 13 20:33:48.918765 systemd[1]: sshd@7-172.24.4.206:22-172.24.4.1:53700.service: Deactivated successfully. Jan 13 20:33:48.923146 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:33:48.925553 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:33:48.928742 systemd-logind[1447]: Removed session 10. Jan 13 20:33:53.935107 systemd[1]: Started sshd@8-172.24.4.206:22-172.24.4.1:48048.service - OpenSSH per-connection server daemon (172.24.4.1:48048). Jan 13 20:33:55.223938 sshd[4006]: Accepted publickey for core from 172.24.4.1 port 48048 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:33:55.226125 sshd-session[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:33:55.231919 systemd-logind[1447]: New session 11 of user core. Jan 13 20:33:55.245764 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:33:56.353910 sshd[4008]: Connection closed by 172.24.4.1 port 48048 Jan 13 20:33:56.355207 sshd-session[4006]: pam_unix(sshd:session): session closed for user core Jan 13 20:33:56.362051 systemd[1]: sshd@8-172.24.4.206:22-172.24.4.1:48048.service: Deactivated successfully. 
Jan 13 20:33:56.366833 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:33:56.372493 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:33:56.376120 systemd-logind[1447]: Removed session 11. Jan 13 20:34:01.381074 systemd[1]: Started sshd@9-172.24.4.206:22-172.24.4.1:48050.service - OpenSSH per-connection server daemon (172.24.4.1:48050). Jan 13 20:34:02.642906 sshd[4020]: Accepted publickey for core from 172.24.4.1 port 48050 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:34:02.645673 sshd-session[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:02.655986 systemd-logind[1447]: New session 12 of user core. Jan 13 20:34:02.665871 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:34:03.496619 sshd[4025]: Connection closed by 172.24.4.1 port 48050 Jan 13 20:34:03.496669 sshd-session[4020]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:03.510598 systemd[1]: sshd@9-172.24.4.206:22-172.24.4.1:48050.service: Deactivated successfully. Jan 13 20:34:03.514457 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:34:03.519123 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:34:03.528181 systemd[1]: Started sshd@10-172.24.4.206:22-172.24.4.1:32966.service - OpenSSH per-connection server daemon (172.24.4.1:32966). Jan 13 20:34:03.531625 systemd-logind[1447]: Removed session 12. Jan 13 20:34:04.994522 sshd[4036]: Accepted publickey for core from 172.24.4.1 port 32966 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:34:04.997464 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:05.007638 systemd-logind[1447]: New session 13 of user core. Jan 13 20:34:05.018958 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:34:05.908612 sshd[4038]: Connection closed by 172.24.4.1 port 32966 Jan 13 20:34:05.909643 sshd-session[4036]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:05.920854 systemd[1]: sshd@10-172.24.4.206:22-172.24.4.1:32966.service: Deactivated successfully. Jan 13 20:34:05.924287 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:34:05.926218 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:34:05.934767 systemd[1]: Started sshd@11-172.24.4.206:22-172.24.4.1:32978.service - OpenSSH per-connection server daemon (172.24.4.1:32978). Jan 13 20:34:05.938521 systemd-logind[1447]: Removed session 13. Jan 13 20:34:07.224660 sshd[4047]: Accepted publickey for core from 172.24.4.1 port 32978 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:34:07.227511 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:07.239155 systemd-logind[1447]: New session 14 of user core. Jan 13 20:34:07.246879 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:34:07.981354 sshd[4049]: Connection closed by 172.24.4.1 port 32978 Jan 13 20:34:07.982174 sshd-session[4047]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:07.986398 systemd[1]: sshd@11-172.24.4.206:22-172.24.4.1:32978.service: Deactivated successfully. Jan 13 20:34:07.989888 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:34:07.994500 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. 
Jan 13 20:34:07.996691 systemd-logind[1447]: Removed session 14. Jan 13 20:34:13.007345 systemd[1]: Started sshd@12-172.24.4.206:22-172.24.4.1:32982.service - OpenSSH per-connection server daemon (172.24.4.1:32982). Jan 13 20:34:14.459286 sshd[4059]: Accepted publickey for core from 172.24.4.1 port 32982 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:34:14.462216 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:14.474659 systemd-logind[1447]: New session 15 of user core. Jan 13 20:34:14.481880 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:34:15.217367 sshd[4061]: Connection closed by 172.24.4.1 port 32982 Jan 13 20:34:15.217212 sshd-session[4059]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:15.221854 systemd[1]: sshd@12-172.24.4.206:22-172.24.4.1:32982.service: Deactivated successfully. Jan 13 20:34:15.225113 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:34:15.229175 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:34:15.232118 systemd-logind[1447]: Removed session 15. Jan 13 20:34:20.240098 systemd[1]: Started sshd@13-172.24.4.206:22-172.24.4.1:42846.service - OpenSSH per-connection server daemon (172.24.4.1:42846). Jan 13 20:34:21.437604 sshd[4072]: Accepted publickey for core from 172.24.4.1 port 42846 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:34:21.440645 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:21.451626 systemd-logind[1447]: New session 16 of user core. Jan 13 20:34:21.458869 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:34:22.206669 sshd[4074]: Connection closed by 172.24.4.1 port 42846 Jan 13 20:34:22.207360 sshd-session[4072]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:22.222309 systemd[1]: sshd@13-172.24.4.206:22-172.24.4.1:42846.service: Deactivated successfully. Jan 13 20:34:22.228359 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:34:22.230684 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:34:22.245177 systemd[1]: Started sshd@14-172.24.4.206:22-172.24.4.1:42848.service - OpenSSH per-connection server daemon (172.24.4.1:42848). Jan 13 20:34:22.247972 systemd-logind[1447]: Removed session 16. Jan 13 20:34:23.464737 sshd[4086]: Accepted publickey for core from 172.24.4.1 port 42848 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:34:23.467311 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:23.478090 systemd-logind[1447]: New session 17 of user core. Jan 13 20:34:23.487894 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:34:24.334608 sshd[4088]: Connection closed by 172.24.4.1 port 42848 Jan 13 20:34:24.335140 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:24.347812 systemd[1]: sshd@14-172.24.4.206:22-172.24.4.1:42848.service: Deactivated successfully. Jan 13 20:34:24.353863 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:34:24.357615 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:34:24.366130 systemd[1]: Started sshd@15-172.24.4.206:22-172.24.4.1:37012.service - OpenSSH per-connection server daemon (172.24.4.1:37012). Jan 13 20:34:24.370211 systemd-logind[1447]: Removed session 17. 
Jan 13 20:34:26.474869 sshd[4097]: Accepted publickey for core from 172.24.4.1 port 37012 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:34:26.477615 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:26.489188 systemd-logind[1447]: New session 18 of user core. Jan 13 20:34:26.495904 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:34:29.047157 sshd[4099]: Connection closed by 172.24.4.1 port 37012 Jan 13 20:34:29.047259 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:29.053922 systemd[1]: sshd@15-172.24.4.206:22-172.24.4.1:37012.service: Deactivated successfully. Jan 13 20:34:29.057914 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:34:29.060320 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:34:29.071179 systemd[1]: Started sshd@16-172.24.4.206:22-172.24.4.1:37014.service - OpenSSH per-connection server daemon (172.24.4.1:37014). Jan 13 20:34:29.075094 systemd-logind[1447]: Removed session 18. Jan 13 20:34:30.118266 sshd[4118]: Accepted publickey for core from 172.24.4.1 port 37014 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:34:30.120985 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:30.133022 systemd-logind[1447]: New session 19 of user core. Jan 13 20:34:30.139880 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:34:31.215667 sshd[4120]: Connection closed by 172.24.4.1 port 37014 Jan 13 20:34:31.220109 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:31.231144 systemd[1]: sshd@16-172.24.4.206:22-172.24.4.1:37014.service: Deactivated successfully. Jan 13 20:34:31.236235 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:34:31.241185 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:34:31.247297 systemd[1]: Started sshd@17-172.24.4.206:22-172.24.4.1:37030.service - OpenSSH per-connection server daemon (172.24.4.1:37030). Jan 13 20:34:31.252322 systemd-logind[1447]: Removed session 19. Jan 13 20:34:32.470546 sshd[4128]: Accepted publickey for core from 172.24.4.1 port 37030 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:34:32.473398 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:32.484708 systemd-logind[1447]: New session 20 of user core. Jan 13 20:34:32.492918 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:34:33.107620 sshd[4130]: Connection closed by 172.24.4.1 port 37030 Jan 13 20:34:33.107013 sshd-session[4128]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:33.114429 systemd[1]: sshd@17-172.24.4.206:22-172.24.4.1:37030.service: Deactivated successfully. Jan 13 20:34:33.120310 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:34:33.125306 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:34:33.128096 systemd-logind[1447]: Removed session 20. Jan 13 20:34:38.130483 systemd[1]: Started sshd@18-172.24.4.206:22-172.24.4.1:49284.service - OpenSSH per-connection server daemon (172.24.4.1:49284). 
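[Annotation] From here on the log settles into a steady SSH pattern: a per-connection sshd@N unit starts, pam_unix opens a session for core, logind allocates session N, and the same pieces tear down in reverse order. Assuming the journal is exported one entry per line (as journalctl prints it), the open/close pairs can be matched to recover per-session durations:

import re
from datetime import datetime

OPEN  = re.compile(r"^(\w{3} \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.")
CLOSE = re.compile(r"^(\w{3} \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

def session_spans(lines, year=2025):
    # Journal short-format timestamps omit the year, so it is supplied here.
    ts = lambda s: datetime.strptime(f"{year} {s}", "%Y %b %d %H:%M:%S.%f")
    opened = {}
    for line in lines:
        if m := OPEN.match(line):
            opened[m[2]] = (ts(m[1]), m[3])
        elif (m := CLOSE.match(line)) and m[2] in opened:
            start, user = opened.pop(m[2])
            yield m[2], user, (ts(m[1]) - start).total_seconds()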
Jan 13 20:34:39.767522 sshd[4146]: Accepted publickey for core from 172.24.4.1 port 49284 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:34:39.770300 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:39.783933 systemd-logind[1447]: New session 21 of user core. Jan 13 20:34:39.792904 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:34:40.538032 sshd[4148]: Connection closed by 172.24.4.1 port 49284 Jan 13 20:34:40.538783 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:40.550433 systemd[1]: sshd@18-172.24.4.206:22-172.24.4.1:49284.service: Deactivated successfully. Jan 13 20:34:40.561696 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:34:40.565056 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:34:40.567394 systemd-logind[1447]: Removed session 21. Jan 13 20:34:45.562092 systemd[1]: Started sshd@19-172.24.4.206:22-172.24.4.1:47446.service - OpenSSH per-connection server daemon (172.24.4.1:47446). Jan 13 20:34:46.965682 sshd[4158]: Accepted publickey for core from 172.24.4.1 port 47446 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:34:46.969434 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:46.981345 systemd-logind[1447]: New session 22 of user core. Jan 13 20:34:46.996983 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:34:47.707375 sshd[4160]: Connection closed by 172.24.4.1 port 47446 Jan 13 20:34:47.705838 sshd-session[4158]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:47.725930 systemd[1]: sshd@19-172.24.4.206:22-172.24.4.1:47446.service: Deactivated successfully. Jan 13 20:34:47.730876 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:34:47.733972 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:34:47.743226 systemd[1]: Started sshd@20-172.24.4.206:22-172.24.4.1:47448.service - OpenSSH per-connection server daemon (172.24.4.1:47448). Jan 13 20:34:47.746234 systemd-logind[1447]: Removed session 22. Jan 13 20:34:48.956639 sshd[4171]: Accepted publickey for core from 172.24.4.1 port 47448 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws Jan 13 20:34:48.959204 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:48.968713 systemd-logind[1447]: New session 23 of user core. Jan 13 20:34:48.977936 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:34:50.915239 containerd[1470]: time="2025-01-13T20:34:50.914044087Z" level=info msg="StopContainer for \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\" with timeout 30 (s)" Jan 13 20:34:50.923721 containerd[1470]: time="2025-01-13T20:34:50.923524824Z" level=info msg="Stop container \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\" with signal terminated" Jan 13 20:34:50.927998 systemd[1]: run-containerd-runc-k8s.io-a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b-runc.oMoUwm.mount: Deactivated successfully. Jan 13 20:34:50.944840 systemd[1]: cri-containerd-39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64.scope: Deactivated successfully. 
Jan 13 20:34:50.948841 containerd[1470]: time="2025-01-13T20:34:50.948760569Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:34:50.962262 containerd[1470]: time="2025-01-13T20:34:50.962229244Z" level=info msg="StopContainer for \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\" with timeout 2 (s)" Jan 13 20:34:50.962865 containerd[1470]: time="2025-01-13T20:34:50.962793138Z" level=info msg="Stop container \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\" with signal terminated" Jan 13 20:34:50.974708 systemd-networkd[1374]: lxc_health: Link DOWN Jan 13 20:34:50.974715 systemd-networkd[1374]: lxc_health: Lost carrier Jan 13 20:34:50.981916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64-rootfs.mount: Deactivated successfully. Jan 13 20:34:50.996345 systemd[1]: cri-containerd-a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b.scope: Deactivated successfully. Jan 13 20:34:50.996924 systemd[1]: cri-containerd-a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b.scope: Consumed 8.689s CPU time. Jan 13 20:34:51.013649 containerd[1470]: time="2025-01-13T20:34:51.013189796Z" level=info msg="shim disconnected" id=39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64 namespace=k8s.io Jan 13 20:34:51.013649 containerd[1470]: time="2025-01-13T20:34:51.013642400Z" level=warning msg="cleaning up after shim disconnected" id=39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64 namespace=k8s.io Jan 13 20:34:51.013649 containerd[1470]: time="2025-01-13T20:34:51.013655003Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:34:51.027363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b-rootfs.mount: Deactivated successfully. Jan 13 20:34:51.036331 containerd[1470]: time="2025-01-13T20:34:51.036281480Z" level=info msg="shim disconnected" id=a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b namespace=k8s.io Jan 13 20:34:51.036647 containerd[1470]: time="2025-01-13T20:34:51.036495353Z" level=warning msg="cleaning up after shim disconnected" id=a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b namespace=k8s.io Jan 13 20:34:51.036647 containerd[1470]: time="2025-01-13T20:34:51.036512295Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:34:51.043044 containerd[1470]: time="2025-01-13T20:34:51.042898890Z" level=info msg="StopContainer for \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\" returns successfully" Jan 13 20:34:51.043834 containerd[1470]: time="2025-01-13T20:34:51.043691435Z" level=info msg="StopPodSandbox for \"8dea176cb9715f02bdee61e75d262fb972f05d7beef5b50620fc9e36d9b06a10\"" Jan 13 20:34:51.043834 containerd[1470]: time="2025-01-13T20:34:51.043725239Z" level=info msg="Container to stop \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:34:51.047031 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8dea176cb9715f02bdee61e75d262fb972f05d7beef5b50620fc9e36d9b06a10-shm.mount: Deactivated successfully. 
Jan 13 20:34:51.054178 systemd[1]: cri-containerd-8dea176cb9715f02bdee61e75d262fb972f05d7beef5b50620fc9e36d9b06a10.scope: Deactivated successfully. Jan 13 20:34:51.073522 containerd[1470]: time="2025-01-13T20:34:51.073489246Z" level=info msg="StopContainer for \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\" returns successfully" Jan 13 20:34:51.075310 containerd[1470]: time="2025-01-13T20:34:51.075276099Z" level=info msg="StopPodSandbox for \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\"" Jan 13 20:34:51.075511 containerd[1470]: time="2025-01-13T20:34:51.075450929Z" level=info msg="Container to stop \"fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:34:51.075511 containerd[1470]: time="2025-01-13T20:34:51.075498018Z" level=info msg="Container to stop \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:34:51.075638 containerd[1470]: time="2025-01-13T20:34:51.075509711Z" level=info msg="Container to stop \"491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:34:51.075638 containerd[1470]: time="2025-01-13T20:34:51.075521803Z" level=info msg="Container to stop \"f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:34:51.075638 containerd[1470]: time="2025-01-13T20:34:51.075532303Z" level=info msg="Container to stop \"63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:34:51.082088 systemd[1]: cri-containerd-bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af.scope: Deactivated successfully. 
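[Annotation] The teardown above — StopContainer "with timeout 30 (s)", "signal terminated", then StopPodSandbox once every container has exited — is the CRI-level trace of a graceful pod deletion: SIGTERM, a wait up to the grace period, SIGKILL only if it expires. One way such a deletion is issued, sketched with the Python client (kubeconfig loading is an assumption about the caller's environment; the pod name matches the operator pod in this log):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
v1.delete_namespaced_pod(
    name="cilium-operator-5d85765b45-xfmzd",
    namespace="kube-system",
    grace_period_seconds=30,  # surfaces as StopContainer's 30 s timeout
)

The cilium-agent container is stopped with a 2-second timeout instead, so the two pods evidently carry different grace periods.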
Jan 13 20:34:51.101673 containerd[1470]: time="2025-01-13T20:34:51.101455541Z" level=info msg="shim disconnected" id=8dea176cb9715f02bdee61e75d262fb972f05d7beef5b50620fc9e36d9b06a10 namespace=k8s.io
Jan 13 20:34:51.101673 containerd[1470]: time="2025-01-13T20:34:51.101512588Z" level=warning msg="cleaning up after shim disconnected" id=8dea176cb9715f02bdee61e75d262fb972f05d7beef5b50620fc9e36d9b06a10 namespace=k8s.io
Jan 13 20:34:51.101673 containerd[1470]: time="2025-01-13T20:34:51.101522617Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:34:51.116633 containerd[1470]: time="2025-01-13T20:34:51.116546314Z" level=info msg="shim disconnected" id=bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af namespace=k8s.io
Jan 13 20:34:51.116633 containerd[1470]: time="2025-01-13T20:34:51.116625403Z" level=warning msg="cleaning up after shim disconnected" id=bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af namespace=k8s.io
Jan 13 20:34:51.116633 containerd[1470]: time="2025-01-13T20:34:51.116636664Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:34:51.119805 containerd[1470]: time="2025-01-13T20:34:51.119762273Z" level=info msg="TearDown network for sandbox \"8dea176cb9715f02bdee61e75d262fb972f05d7beef5b50620fc9e36d9b06a10\" successfully"
Jan 13 20:34:51.119805 containerd[1470]: time="2025-01-13T20:34:51.119793352Z" level=info msg="StopPodSandbox for \"8dea176cb9715f02bdee61e75d262fb972f05d7beef5b50620fc9e36d9b06a10\" returns successfully"
Jan 13 20:34:51.139392 containerd[1470]: time="2025-01-13T20:34:51.139264012Z" level=info msg="TearDown network for sandbox \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\" successfully"
Jan 13 20:34:51.139392 containerd[1470]: time="2025-01-13T20:34:51.139299118Z" level=info msg="StopPodSandbox for \"bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af\" returns successfully"
Jan 13 20:34:51.215278 kubelet[2622]: I0113 20:34:51.214215 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-bpf-maps\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.215278 kubelet[2622]: I0113 20:34:51.214260 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/67cbd5cf-a245-4e2c-8a84-51926d16224d-hubble-tls\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.215278 kubelet[2622]: I0113 20:34:51.214277 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-cilium-run\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.215278 kubelet[2622]: I0113 20:34:51.214297 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-lib-modules\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.215278 kubelet[2622]: I0113 20:34:51.214316 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl5hs\" (UniqueName: \"kubernetes.io/projected/cc867dce-fdb3-46b8-a8c8-7ee7973687bf-kube-api-access-wl5hs\") pod \"cc867dce-fdb3-46b8-a8c8-7ee7973687bf\" (UID: \"cc867dce-fdb3-46b8-a8c8-7ee7973687bf\") "
Jan 13 20:34:51.215278 kubelet[2622]: I0113 20:34:51.214335 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-host-proc-sys-kernel\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.215967 kubelet[2622]: I0113 20:34:51.214325 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:34:51.215967 kubelet[2622]: I0113 20:34:51.214350 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-cilium-cgroup\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.215967 kubelet[2622]: I0113 20:34:51.214366 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-cni-path\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.215967 kubelet[2622]: I0113 20:34:51.214375 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:34:51.215967 kubelet[2622]: I0113 20:34:51.214387 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhdhh\" (UniqueName: \"kubernetes.io/projected/67cbd5cf-a245-4e2c-8a84-51926d16224d-kube-api-access-jhdhh\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.215967 kubelet[2622]: I0113 20:34:51.214404 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-xtables-lock\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.216219 kubelet[2622]: I0113 20:34:51.214425 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/67cbd5cf-a245-4e2c-8a84-51926d16224d-clustermesh-secrets\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.216219 kubelet[2622]: I0113 20:34:51.214454 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-host-proc-sys-net\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.216219 kubelet[2622]: I0113 20:34:51.214470 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-etc-cni-netd\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.216219 kubelet[2622]: I0113 20:34:51.214489 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc867dce-fdb3-46b8-a8c8-7ee7973687bf-cilium-config-path\") pod \"cc867dce-fdb3-46b8-a8c8-7ee7973687bf\" (UID: \"cc867dce-fdb3-46b8-a8c8-7ee7973687bf\") "
Jan 13 20:34:51.216219 kubelet[2622]: I0113 20:34:51.214504 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-hostproc\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.216219 kubelet[2622]: I0113 20:34:51.214522 2622 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67cbd5cf-a245-4e2c-8a84-51926d16224d-cilium-config-path\") pod \"67cbd5cf-a245-4e2c-8a84-51926d16224d\" (UID: \"67cbd5cf-a245-4e2c-8a84-51926d16224d\") "
Jan 13 20:34:51.216455 kubelet[2622]: I0113 20:34:51.214552 2622 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-lib-modules\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.216455 kubelet[2622]: I0113 20:34:51.214577 2622 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-bpf-maps\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.218978 kubelet[2622]: I0113 20:34:51.217218 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:34:51.219417 kubelet[2622]: I0113 20:34:51.219396 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67cbd5cf-a245-4e2c-8a84-51926d16224d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:34:51.219507 kubelet[2622]: I0113 20:34:51.219493 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:34:51.219642 kubelet[2622]: I0113 20:34:51.219611 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:34:51.219744 kubelet[2622]: I0113 20:34:51.219729 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:34:51.219830 kubelet[2622]: I0113 20:34:51.219816 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-cni-path" (OuterVolumeSpecName: "cni-path") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:34:51.222318 kubelet[2622]: I0113 20:34:51.222282 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:34:51.222378 kubelet[2622]: I0113 20:34:51.222344 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:34:51.222545 kubelet[2622]: I0113 20:34:51.222523 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cbd5cf-a245-4e2c-8a84-51926d16224d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:34:51.222871 kubelet[2622]: I0113 20:34:51.222794 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc867dce-fdb3-46b8-a8c8-7ee7973687bf-kube-api-access-wl5hs" (OuterVolumeSpecName: "kube-api-access-wl5hs") pod "cc867dce-fdb3-46b8-a8c8-7ee7973687bf" (UID: "cc867dce-fdb3-46b8-a8c8-7ee7973687bf"). InnerVolumeSpecName "kube-api-access-wl5hs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:34:51.222950 kubelet[2622]: I0113 20:34:51.222823 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-hostproc" (OuterVolumeSpecName: "hostproc") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:34:51.223324 kubelet[2622]: I0113 20:34:51.223302 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cbd5cf-a245-4e2c-8a84-51926d16224d-kube-api-access-jhdhh" (OuterVolumeSpecName: "kube-api-access-jhdhh") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "kube-api-access-jhdhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:34:51.223870 kubelet[2622]: I0113 20:34:51.223849 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cbd5cf-a245-4e2c-8a84-51926d16224d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "67cbd5cf-a245-4e2c-8a84-51926d16224d" (UID: "67cbd5cf-a245-4e2c-8a84-51926d16224d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 20:34:51.225637 kubelet[2622]: I0113 20:34:51.225604 2622 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc867dce-fdb3-46b8-a8c8-7ee7973687bf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cc867dce-fdb3-46b8-a8c8-7ee7973687bf" (UID: "cc867dce-fdb3-46b8-a8c8-7ee7973687bf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:34:51.313227 kubelet[2622]: I0113 20:34:51.312602 2622 scope.go:117] "RemoveContainer" containerID="a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b"
Jan 13 20:34:51.320441 containerd[1470]: time="2025-01-13T20:34:51.318197917Z" level=info msg="RemoveContainer for \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\""
Jan 13 20:34:51.320969 kubelet[2622]: I0113 20:34:51.317673 2622 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wl5hs\" (UniqueName: \"kubernetes.io/projected/cc867dce-fdb3-46b8-a8c8-7ee7973687bf-kube-api-access-wl5hs\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.321494 kubelet[2622]: I0113 20:34:51.321351 2622 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-host-proc-sys-kernel\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.325620 kubelet[2622]: I0113 20:34:51.325528 2622 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-cilium-cgroup\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.331852 kubelet[2622]: I0113 20:34:51.325752 2622 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-cni-path\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.331852 kubelet[2622]: I0113 20:34:51.325786 2622 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jhdhh\" (UniqueName: \"kubernetes.io/projected/67cbd5cf-a245-4e2c-8a84-51926d16224d-kube-api-access-jhdhh\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.331852 kubelet[2622]: I0113 20:34:51.325812 2622 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-xtables-lock\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.331852 kubelet[2622]: I0113 20:34:51.325841 2622 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-hostproc\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.331852 kubelet[2622]: I0113 20:34:51.325868 2622 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/67cbd5cf-a245-4e2c-8a84-51926d16224d-clustermesh-secrets\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.331852 kubelet[2622]: I0113 20:34:51.325891 2622 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-host-proc-sys-net\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.331852 kubelet[2622]: I0113 20:34:51.325915 2622 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-etc-cni-netd\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.332459 kubelet[2622]: I0113 20:34:51.325939 2622 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc867dce-fdb3-46b8-a8c8-7ee7973687bf-cilium-config-path\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.332459 kubelet[2622]: I0113 20:34:51.325961 2622 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67cbd5cf-a245-4e2c-8a84-51926d16224d-cilium-config-path\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.332459 kubelet[2622]: I0113 20:34:51.325985 2622 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/67cbd5cf-a245-4e2c-8a84-51926d16224d-hubble-tls\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.332459 kubelet[2622]: I0113 20:34:51.326009 2622 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/67cbd5cf-a245-4e2c-8a84-51926d16224d-cilium-run\") on node \"ci-4186-1-0-8-e51fb1a5ac.novalocal\" DevicePath \"\""
Jan 13 20:34:51.331980 systemd[1]: Removed slice kubepods-burstable-pod67cbd5cf_a245_4e2c_8a84_51926d16224d.slice - libcontainer container kubepods-burstable-pod67cbd5cf_a245_4e2c_8a84_51926d16224d.slice.
Jan 13 20:34:51.332208 systemd[1]: kubepods-burstable-pod67cbd5cf_a245_4e2c_8a84_51926d16224d.slice: Consumed 8.772s CPU time.
Jan 13 20:34:51.355719 containerd[1470]: time="2025-01-13T20:34:51.354088107Z" level=info msg="RemoveContainer for \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\" returns successfully"
Jan 13 20:34:51.354375 systemd[1]: Removed slice kubepods-besteffort-podcc867dce_fdb3_46b8_a8c8_7ee7973687bf.slice - libcontainer container kubepods-besteffort-podcc867dce_fdb3_46b8_a8c8_7ee7973687bf.slice.
Jan 13 20:34:51.359468 kubelet[2622]: I0113 20:34:51.358350 2622 scope.go:117] "RemoveContainer" containerID="fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b"
Jan 13 20:34:51.362907 containerd[1470]: time="2025-01-13T20:34:51.362847460Z" level=info msg="RemoveContainer for \"fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b\""
Jan 13 20:34:51.371822 containerd[1470]: time="2025-01-13T20:34:51.371760944Z" level=info msg="RemoveContainer for \"fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b\" returns successfully"
Jan 13 20:34:51.372482 kubelet[2622]: I0113 20:34:51.372397 2622 scope.go:117] "RemoveContainer" containerID="63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075"
Jan 13 20:34:51.375709 containerd[1470]: time="2025-01-13T20:34:51.375653250Z" level=info msg="RemoveContainer for \"63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075\""
Jan 13 20:34:51.384432 containerd[1470]: time="2025-01-13T20:34:51.384361326Z" level=info msg="RemoveContainer for \"63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075\" returns successfully"
Jan 13 20:34:51.385042 kubelet[2622]: I0113 20:34:51.384988 2622 scope.go:117] "RemoveContainer" containerID="f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335"
Jan 13 20:34:51.386477 containerd[1470]: time="2025-01-13T20:34:51.386388302Z" level=info msg="RemoveContainer for \"f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335\""
Jan 13 20:34:51.437629 containerd[1470]: time="2025-01-13T20:34:51.437153148Z" level=info msg="RemoveContainer for \"f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335\" returns successfully"
Jan 13 20:34:51.439822 kubelet[2622]: I0113 20:34:51.439122 2622 scope.go:117] "RemoveContainer" containerID="491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562"
Jan 13 20:34:51.443306 containerd[1470]: time="2025-01-13T20:34:51.442772253Z" level=info msg="RemoveContainer for \"491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562\""
Jan 13 20:34:51.553217 containerd[1470]: time="2025-01-13T20:34:51.553143140Z" level=info msg="RemoveContainer for \"491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562\" returns successfully"
Jan 13 20:34:51.554038 kubelet[2622]: I0113 20:34:51.553618 2622 scope.go:117] "RemoveContainer" containerID="a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b"
Jan 13 20:34:51.554927 containerd[1470]: time="2025-01-13T20:34:51.554746847Z" level=error msg="ContainerStatus for \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\": not found"
Jan 13 20:34:51.555200 kubelet[2622]: E0113 20:34:51.555063 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\": not found" containerID="a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b"
Jan 13 20:34:51.555291 kubelet[2622]: I0113 20:34:51.555130 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b"} err="failed to get container status \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"a578ef5c6fc6d0a7394c2073c5baf8cd7396b78b8826b987ebef1e9870aa2e9b\": not found"
Jan 13 20:34:51.555358 kubelet[2622]: I0113 20:34:51.555297 2622 scope.go:117] "RemoveContainer" containerID="fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b"
Jan 13 20:34:51.555913 containerd[1470]: time="2025-01-13T20:34:51.555837255Z" level=error msg="ContainerStatus for \"fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b\": not found"
Jan 13 20:34:51.556643 kubelet[2622]: E0113 20:34:51.556368 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b\": not found" containerID="fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b"
Jan 13 20:34:51.556643 kubelet[2622]: I0113 20:34:51.556431 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b"} err="failed to get container status \"fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b\": rpc error: code = NotFound desc = an error occurred when try to find container \"fbfe8031cb98ed8e8a38ae3206233fc3eec804c2043ce687651f55ab3689852b\": not found"
Jan 13 20:34:51.556643 kubelet[2622]: I0113 20:34:51.556477 2622 scope.go:117] "RemoveContainer" containerID="63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075"
Jan 13 20:34:51.557639 containerd[1470]: time="2025-01-13T20:34:51.557078147Z" level=error msg="ContainerStatus for \"63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075\": not found"
Jan 13 20:34:51.557765 kubelet[2622]: E0113 20:34:51.557403 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075\": not found" containerID="63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075"
Jan 13 20:34:51.557765 kubelet[2622]: I0113 20:34:51.557447 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075"} err="failed to get container status \"63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075\": rpc error: code = NotFound desc = an error occurred when try to find container \"63b36b50b4ad88e555e82968319be091fda694785c9339855f5598fa8df71075\": not found"
Jan 13 20:34:51.557765 kubelet[2622]: I0113 20:34:51.557481 2622 scope.go:117] "RemoveContainer" containerID="f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335"
Jan 13 20:34:51.558945 containerd[1470]: time="2025-01-13T20:34:51.558640005Z" level=error msg="ContainerStatus for \"f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335\": not found"
Jan 13 20:34:51.559137 kubelet[2622]: E0113 20:34:51.559081 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335\": not found" containerID="f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335"
Jan 13 20:34:51.559228 kubelet[2622]: I0113 20:34:51.559132 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335"} err="failed to get container status \"f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335\": rpc error: code = NotFound desc = an error occurred when try to find container \"f16f6621a201e761dde3c35dfe504ff7aa6804b1d1488b88f3a5957d7bb7b335\": not found"
Jan 13 20:34:51.559228 kubelet[2622]: I0113 20:34:51.559171 2622 scope.go:117] "RemoveContainer" containerID="491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562"
Jan 13 20:34:51.559766 containerd[1470]: time="2025-01-13T20:34:51.559633561Z" level=error msg="ContainerStatus for \"491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562\": not found"
Jan 13 20:34:51.560257 kubelet[2622]: E0113 20:34:51.560114 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562\": not found" containerID="491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562"
Jan 13 20:34:51.560364 kubelet[2622]: I0113 20:34:51.560235 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562"} err="failed to get container status \"491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562\": rpc error: code = NotFound desc = an error occurred when try to find container \"491166d92287f4b566833cc17ce7b202aba40c93935c353a25518730eb3d4562\": not found"
Jan 13 20:34:51.560364 kubelet[2622]: I0113 20:34:51.560313 2622 scope.go:117] "RemoveContainer" containerID="39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64"
Jan 13 20:34:51.563376 containerd[1470]: time="2025-01-13T20:34:51.563155929Z" level=info msg="RemoveContainer for \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\""
Jan 13 20:34:51.582741 containerd[1470]: time="2025-01-13T20:34:51.582546476Z" level=info msg="RemoveContainer for \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\" returns successfully"
Jan 13 20:34:51.583642 kubelet[2622]: I0113 20:34:51.583545 2622 scope.go:117] "RemoveContainer" containerID="39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64"
Jan 13 20:34:51.584185 containerd[1470]: time="2025-01-13T20:34:51.584092214Z" level=error msg="ContainerStatus for \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\": not found"
Jan 13 20:34:51.584605 kubelet[2622]: E0113 20:34:51.584509 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\": not found" containerID="39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64"
Jan 13 20:34:51.584718 kubelet[2622]: I0113 20:34:51.584604 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64"} err="failed to get container status \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\": rpc error: code = NotFound desc = an error occurred when try to find container \"39213af031c1488bbf2af6aec38b06214651bf70e8a2cfb120f4942a4bf18e64\": not found"
Jan 13 20:34:51.788847 kubelet[2622]: I0113 20:34:51.787769 2622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67cbd5cf-a245-4e2c-8a84-51926d16224d" path="/var/lib/kubelet/pods/67cbd5cf-a245-4e2c-8a84-51926d16224d/volumes"
Jan 13 20:34:51.789927 kubelet[2622]: I0113 20:34:51.789860 2622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc867dce-fdb3-46b8-a8c8-7ee7973687bf" path="/var/lib/kubelet/pods/cc867dce-fdb3-46b8-a8c8-7ee7973687bf/volumes"
Jan 13 20:34:51.923188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af-rootfs.mount: Deactivated successfully.
Jan 13 20:34:51.924881 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb51f95e62832380d30bfa98c1dfe35fbde93805e6c3dba9536f3838486387af-shm.mount: Deactivated successfully.
Jan 13 20:34:51.925070 systemd[1]: var-lib-kubelet-pods-67cbd5cf\x2da245\x2d4e2c\x2d8a84\x2d51926d16224d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:34:51.925240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dea176cb9715f02bdee61e75d262fb972f05d7beef5b50620fc9e36d9b06a10-rootfs.mount: Deactivated successfully.
Jan 13 20:34:51.925397 systemd[1]: var-lib-kubelet-pods-67cbd5cf\x2da245\x2d4e2c\x2d8a84\x2d51926d16224d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 20:34:51.926350 systemd[1]: var-lib-kubelet-pods-cc867dce\x2dfdb3\x2d46b8\x2da8c8\x2d7ee7973687bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwl5hs.mount: Deactivated successfully.
Jan 13 20:34:51.926626 systemd[1]: var-lib-kubelet-pods-67cbd5cf\x2da245\x2d4e2c\x2d8a84\x2d51926d16224d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djhdhh.mount: Deactivated successfully.
Jan 13 20:34:52.888035 kubelet[2622]: E0113 20:34:52.887950 2622 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:34:52.959120 sshd[4173]: Connection closed by 172.24.4.1 port 47448
Jan 13 20:34:52.959913 sshd-session[4171]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:52.977912 systemd[1]: sshd@20-172.24.4.206:22-172.24.4.1:47448.service: Deactivated successfully.
Jan 13 20:34:52.983930 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 20:34:52.986356 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit.
Jan 13 20:34:53.000263 systemd[1]: Started sshd@21-172.24.4.206:22-172.24.4.1:47456.service - OpenSSH per-connection server daemon (172.24.4.1:47456).
Jan 13 20:34:53.003280 systemd-logind[1447]: Removed session 23.
Jan 13 20:34:54.209975 sshd[4334]: Accepted publickey for core from 172.24.4.1 port 47456 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:34:54.212805 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:54.223641 systemd-logind[1447]: New session 24 of user core.
Jan 13 20:34:54.235917 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 20:34:55.595602 kubelet[2622]: E0113 20:34:55.592540 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="67cbd5cf-a245-4e2c-8a84-51926d16224d" containerName="mount-cgroup"
Jan 13 20:34:55.595602 kubelet[2622]: E0113 20:34:55.592588 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="67cbd5cf-a245-4e2c-8a84-51926d16224d" containerName="mount-bpf-fs"
Jan 13 20:34:55.595602 kubelet[2622]: E0113 20:34:55.592595 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="67cbd5cf-a245-4e2c-8a84-51926d16224d" containerName="clean-cilium-state"
Jan 13 20:34:55.595602 kubelet[2622]: E0113 20:34:55.592602 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="67cbd5cf-a245-4e2c-8a84-51926d16224d" containerName="cilium-agent"
Jan 13 20:34:55.595602 kubelet[2622]: E0113 20:34:55.592609 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc867dce-fdb3-46b8-a8c8-7ee7973687bf" containerName="cilium-operator"
Jan 13 20:34:55.595602 kubelet[2622]: E0113 20:34:55.592615 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="67cbd5cf-a245-4e2c-8a84-51926d16224d" containerName="apply-sysctl-overwrites"
Jan 13 20:34:55.595602 kubelet[2622]: I0113 20:34:55.592640 2622 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc867dce-fdb3-46b8-a8c8-7ee7973687bf" containerName="cilium-operator"
Jan 13 20:34:55.595602 kubelet[2622]: I0113 20:34:55.592646 2622 memory_manager.go:354] "RemoveStaleState removing state" podUID="67cbd5cf-a245-4e2c-8a84-51926d16224d" containerName="cilium-agent"
Jan 13 20:34:55.604064 kubelet[2622]: W0113 20:34:55.602977 2622 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4186-1-0-8-e51fb1a5ac.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-8-e51fb1a5ac.novalocal' and this object
Jan 13 20:34:55.604064 kubelet[2622]: W0113 20:34:55.602986 2622 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4186-1-0-8-e51fb1a5ac.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-8-e51fb1a5ac.novalocal' and this object
Jan 13 20:34:55.604064 kubelet[2622]: E0113 20:34:55.603018 2622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4186-1-0-8-e51fb1a5ac.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-8-e51fb1a5ac.novalocal' and this object" logger="UnhandledError"
Jan 13 20:34:55.604064 kubelet[2622]: E0113 20:34:55.603031 2622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4186-1-0-8-e51fb1a5ac.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-8-e51fb1a5ac.novalocal' and this object" logger="UnhandledError"
Jan 13 20:34:55.605859 kubelet[2622]: W0113 20:34:55.603066 2622 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4186-1-0-8-e51fb1a5ac.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-8-e51fb1a5ac.novalocal' and this object
Jan 13 20:34:55.605859 kubelet[2622]: E0113 20:34:55.603079 2622 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4186-1-0-8-e51fb1a5ac.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186-1-0-8-e51fb1a5ac.novalocal' and this object" logger="UnhandledError"
Jan 13 20:34:55.611283 systemd[1]: Created slice kubepods-burstable-pod16002e12_bc6e_447e_b194_fc9b90a93138.slice - libcontainer container kubepods-burstable-pod16002e12_bc6e_447e_b194_fc9b90a93138.slice.
Jan 13 20:34:55.655584 kubelet[2622]: I0113 20:34:55.655337 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16002e12-bc6e-447e-b194-fc9b90a93138-host-proc-sys-net\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.655584 kubelet[2622]: I0113 20:34:55.655384 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16002e12-bc6e-447e-b194-fc9b90a93138-host-proc-sys-kernel\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.655584 kubelet[2622]: I0113 20:34:55.655407 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nnlr\" (UniqueName: \"kubernetes.io/projected/16002e12-bc6e-447e-b194-fc9b90a93138-kube-api-access-6nnlr\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.655584 kubelet[2622]: I0113 20:34:55.655427 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16002e12-bc6e-447e-b194-fc9b90a93138-etc-cni-netd\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.655584 kubelet[2622]: I0113 20:34:55.655448 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16002e12-bc6e-447e-b194-fc9b90a93138-hubble-tls\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.655584 kubelet[2622]: I0113 20:34:55.655467 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16002e12-bc6e-447e-b194-fc9b90a93138-lib-modules\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.655866 kubelet[2622]: I0113 20:34:55.655484 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16002e12-bc6e-447e-b194-fc9b90a93138-bpf-maps\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.655866 kubelet[2622]: I0113 20:34:55.655501 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/16002e12-bc6e-447e-b194-fc9b90a93138-cilium-ipsec-secrets\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.655866 kubelet[2622]: I0113 20:34:55.655521 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16002e12-bc6e-447e-b194-fc9b90a93138-cilium-cgroup\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.655866 kubelet[2622]: I0113 20:34:55.655540 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16002e12-bc6e-447e-b194-fc9b90a93138-cilium-config-path\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.655866 kubelet[2622]: I0113 20:34:55.655604 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16002e12-bc6e-447e-b194-fc9b90a93138-cilium-run\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.655866 kubelet[2622]: I0113 20:34:55.655656 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16002e12-bc6e-447e-b194-fc9b90a93138-hostproc\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.656020 kubelet[2622]: I0113 20:34:55.655679 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16002e12-bc6e-447e-b194-fc9b90a93138-cni-path\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.656020 kubelet[2622]: I0113 20:34:55.655704 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16002e12-bc6e-447e-b194-fc9b90a93138-clustermesh-secrets\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.656020 kubelet[2622]: I0113 20:34:55.655747 2622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16002e12-bc6e-447e-b194-fc9b90a93138-xtables-lock\") pod \"cilium-jxfdx\" (UID: \"16002e12-bc6e-447e-b194-fc9b90a93138\") " pod="kube-system/cilium-jxfdx"
Jan 13 20:34:55.736168 sshd[4336]: Connection closed by 172.24.4.1 port 47456
Jan 13 20:34:55.737053 sshd-session[4334]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:55.750099 systemd[1]: sshd@21-172.24.4.206:22-172.24.4.1:47456.service: Deactivated successfully.
Jan 13 20:34:55.755296 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 20:34:55.760212 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit.
Jan 13 20:34:55.770771 systemd[1]: Started sshd@22-172.24.4.206:22-172.24.4.1:57236.service - OpenSSH per-connection server daemon (172.24.4.1:57236).
Jan 13 20:34:55.778160 systemd-logind[1447]: Removed session 24.
Jan 13 20:34:56.758521 kubelet[2622]: E0113 20:34:56.757605 2622 secret.go:188] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Jan 13 20:34:56.758521 kubelet[2622]: E0113 20:34:56.757836 2622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16002e12-bc6e-447e-b194-fc9b90a93138-cilium-ipsec-secrets podName:16002e12-bc6e-447e-b194-fc9b90a93138 nodeName:}" failed. No retries permitted until 2025-01-13 20:34:57.257798727 +0000 UTC m=+149.628785364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/16002e12-bc6e-447e-b194-fc9b90a93138-cilium-ipsec-secrets") pod "cilium-jxfdx" (UID: "16002e12-bc6e-447e-b194-fc9b90a93138") : failed to sync secret cache: timed out waiting for the condition
Jan 13 20:34:56.758521 kubelet[2622]: E0113 20:34:56.758302 2622 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jan 13 20:34:56.758521 kubelet[2622]: E0113 20:34:56.758373 2622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16002e12-bc6e-447e-b194-fc9b90a93138-clustermesh-secrets podName:16002e12-bc6e-447e-b194-fc9b90a93138 nodeName:}" failed. No retries permitted until 2025-01-13 20:34:57.258349556 +0000 UTC m=+149.629336183 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/16002e12-bc6e-447e-b194-fc9b90a93138-clustermesh-secrets") pod "cilium-jxfdx" (UID: "16002e12-bc6e-447e-b194-fc9b90a93138") : failed to sync secret cache: timed out waiting for the condition
Jan 13 20:34:57.416965 containerd[1470]: time="2025-01-13T20:34:57.416815189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jxfdx,Uid:16002e12-bc6e-447e-b194-fc9b90a93138,Namespace:kube-system,Attempt:0,}"
Jan 13 20:34:57.421143 sshd[4346]: Accepted publickey for core from 172.24.4.1 port 57236 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:34:57.426803 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:57.449348 systemd-logind[1447]: New session 25 of user core.
Jan 13 20:34:57.468912 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 20:34:57.488759 containerd[1470]: time="2025-01-13T20:34:57.488661225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:34:57.488759 containerd[1470]: time="2025-01-13T20:34:57.488716940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:34:57.489014 containerd[1470]: time="2025-01-13T20:34:57.488736597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:34:57.489128 containerd[1470]: time="2025-01-13T20:34:57.488973284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:34:57.514735 systemd[1]: Started cri-containerd-82ce16c66500aae3cafec54b955b2680e49f2440a7281ace8bceb7ae06877784.scope - libcontainer container 82ce16c66500aae3cafec54b955b2680e49f2440a7281ace8bceb7ae06877784.
Jan 13 20:34:57.537246 containerd[1470]: time="2025-01-13T20:34:57.537111329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jxfdx,Uid:16002e12-bc6e-447e-b194-fc9b90a93138,Namespace:kube-system,Attempt:0,} returns sandbox id \"82ce16c66500aae3cafec54b955b2680e49f2440a7281ace8bceb7ae06877784\""
Jan 13 20:34:57.540038 containerd[1470]: time="2025-01-13T20:34:57.539914226Z" level=info msg="CreateContainer within sandbox \"82ce16c66500aae3cafec54b955b2680e49f2440a7281ace8bceb7ae06877784\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:34:57.553529 containerd[1470]: time="2025-01-13T20:34:57.553491036Z" level=info msg="CreateContainer within sandbox \"82ce16c66500aae3cafec54b955b2680e49f2440a7281ace8bceb7ae06877784\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9701495167f3b68b03537f7568abe1ba9cff06e7b72d398de336afb8d85ee604\""
Jan 13 20:34:57.555094 containerd[1470]: time="2025-01-13T20:34:57.554246071Z" level=info msg="StartContainer for \"9701495167f3b68b03537f7568abe1ba9cff06e7b72d398de336afb8d85ee604\""
Jan 13 20:34:57.581717 systemd[1]: Started cri-containerd-9701495167f3b68b03537f7568abe1ba9cff06e7b72d398de336afb8d85ee604.scope - libcontainer container 9701495167f3b68b03537f7568abe1ba9cff06e7b72d398de336afb8d85ee604.
Jan 13 20:34:57.607600 containerd[1470]: time="2025-01-13T20:34:57.607410447Z" level=info msg="StartContainer for \"9701495167f3b68b03537f7568abe1ba9cff06e7b72d398de336afb8d85ee604\" returns successfully"
Jan 13 20:34:57.616539 systemd[1]: cri-containerd-9701495167f3b68b03537f7568abe1ba9cff06e7b72d398de336afb8d85ee604.scope: Deactivated successfully.
Jan 13 20:34:57.656046 containerd[1470]: time="2025-01-13T20:34:57.655826015Z" level=info msg="shim disconnected" id=9701495167f3b68b03537f7568abe1ba9cff06e7b72d398de336afb8d85ee604 namespace=k8s.io
Jan 13 20:34:57.656046 containerd[1470]: time="2025-01-13T20:34:57.655881319Z" level=warning msg="cleaning up after shim disconnected" id=9701495167f3b68b03537f7568abe1ba9cff06e7b72d398de336afb8d85ee604 namespace=k8s.io
Jan 13 20:34:57.656046 containerd[1470]: time="2025-01-13T20:34:57.655894635Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:34:57.892358 kubelet[2622]: E0113 20:34:57.892293 2622 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:34:58.109228 sshd[4361]: Connection closed by 172.24.4.1 port 57236
Jan 13 20:34:58.110630 sshd-session[4346]: pam_unix(sshd:session): session closed for user core
Jan 13 20:34:58.125192 systemd[1]: sshd@22-172.24.4.206:22-172.24.4.1:57236.service: Deactivated successfully.
Jan 13 20:34:58.131112 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 20:34:58.133888 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit.
Jan 13 20:34:58.147259 systemd[1]: Started sshd@23-172.24.4.206:22-172.24.4.1:57244.service - OpenSSH per-connection server daemon (172.24.4.1:57244).
Jan 13 20:34:58.151015 systemd-logind[1447]: Removed session 25.
Jan 13 20:34:58.366950 containerd[1470]: time="2025-01-13T20:34:58.366537327Z" level=info msg="CreateContainer within sandbox \"82ce16c66500aae3cafec54b955b2680e49f2440a7281ace8bceb7ae06877784\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:34:58.423430 containerd[1470]: time="2025-01-13T20:34:58.421961596Z" level=info msg="CreateContainer within sandbox \"82ce16c66500aae3cafec54b955b2680e49f2440a7281ace8bceb7ae06877784\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"639404e3922aaad568278c3ce583d8c11730712ac7c9e4fa380843fa12efbfd0\""
Jan 13 20:34:58.424425 containerd[1470]: time="2025-01-13T20:34:58.423944125Z" level=info msg="StartContainer for \"639404e3922aaad568278c3ce583d8c11730712ac7c9e4fa380843fa12efbfd0\""
Jan 13 20:34:58.468750 systemd[1]: Started cri-containerd-639404e3922aaad568278c3ce583d8c11730712ac7c9e4fa380843fa12efbfd0.scope - libcontainer container 639404e3922aaad568278c3ce583d8c11730712ac7c9e4fa380843fa12efbfd0.
Jan 13 20:34:58.498833 containerd[1470]: time="2025-01-13T20:34:58.498663078Z" level=info msg="StartContainer for \"639404e3922aaad568278c3ce583d8c11730712ac7c9e4fa380843fa12efbfd0\" returns successfully"
Jan 13 20:34:58.504273 systemd[1]: cri-containerd-639404e3922aaad568278c3ce583d8c11730712ac7c9e4fa380843fa12efbfd0.scope: Deactivated successfully.
Jan 13 20:34:58.533858 containerd[1470]: time="2025-01-13T20:34:58.533760978Z" level=info msg="shim disconnected" id=639404e3922aaad568278c3ce583d8c11730712ac7c9e4fa380843fa12efbfd0 namespace=k8s.io
Jan 13 20:34:58.533858 containerd[1470]: time="2025-01-13T20:34:58.533814359Z" level=warning msg="cleaning up after shim disconnected" id=639404e3922aaad568278c3ce583d8c11730712ac7c9e4fa380843fa12efbfd0 namespace=k8s.io
Jan 13 20:34:58.533858 containerd[1470]: time="2025-01-13T20:34:58.533823867Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:34:59.283182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-639404e3922aaad568278c3ce583d8c11730712ac7c9e4fa380843fa12efbfd0-rootfs.mount: Deactivated successfully.
Jan 13 20:34:59.329634 sshd[4461]: Accepted publickey for core from 172.24.4.1 port 57244 ssh2: RSA SHA256:REqJp8CMPSQBVFWS4Vn28p5FbEbu0PrVRmFscRLk4ws
Jan 13 20:34:59.332386 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:34:59.342015 systemd-logind[1447]: New session 26 of user core.
Jan 13 20:34:59.351609 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 20:34:59.377639 containerd[1470]: time="2025-01-13T20:34:59.375916934Z" level=info msg="CreateContainer within sandbox \"82ce16c66500aae3cafec54b955b2680e49f2440a7281ace8bceb7ae06877784\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:34:59.426386 containerd[1470]: time="2025-01-13T20:34:59.426331105Z" level=info msg="CreateContainer within sandbox \"82ce16c66500aae3cafec54b955b2680e49f2440a7281ace8bceb7ae06877784\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6c8a128a80f0d12e17b0cabb1c038ffe647731ae24c2debb1fde48c02c7c7cf6\""
Jan 13 20:34:59.428749 containerd[1470]: time="2025-01-13T20:34:59.428679033Z" level=info msg="StartContainer for \"6c8a128a80f0d12e17b0cabb1c038ffe647731ae24c2debb1fde48c02c7c7cf6\""
Jan 13 20:34:59.481757 systemd[1]: Started cri-containerd-6c8a128a80f0d12e17b0cabb1c038ffe647731ae24c2debb1fde48c02c7c7cf6.scope - libcontainer container 6c8a128a80f0d12e17b0cabb1c038ffe647731ae24c2debb1fde48c02c7c7cf6.
Jan 13 20:34:59.547808 containerd[1470]: time="2025-01-13T20:34:59.547750662Z" level=info msg="StartContainer for \"6c8a128a80f0d12e17b0cabb1c038ffe647731ae24c2debb1fde48c02c7c7cf6\" returns successfully"
Jan 13 20:34:59.555200 systemd[1]: cri-containerd-6c8a128a80f0d12e17b0cabb1c038ffe647731ae24c2debb1fde48c02c7c7cf6.scope: Deactivated successfully.
Jan 13 20:34:59.597547 containerd[1470]: time="2025-01-13T20:34:59.597469572Z" level=info msg="shim disconnected" id=6c8a128a80f0d12e17b0cabb1c038ffe647731ae24c2debb1fde48c02c7c7cf6 namespace=k8s.io
Jan 13 20:34:59.597547 containerd[1470]: time="2025-01-13T20:34:59.597537680Z" level=warning msg="cleaning up after shim disconnected" id=6c8a128a80f0d12e17b0cabb1c038ffe647731ae24c2debb1fde48c02c7c7cf6 namespace=k8s.io
Jan 13 20:34:59.597547 containerd[1470]: time="2025-01-13T20:34:59.597550174Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:35:00.280555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c8a128a80f0d12e17b0cabb1c038ffe647731ae24c2debb1fde48c02c7c7cf6-rootfs.mount: Deactivated successfully.
Jan 13 20:35:00.382852 containerd[1470]: time="2025-01-13T20:35:00.382738854Z" level=info msg="CreateContainer within sandbox \"82ce16c66500aae3cafec54b955b2680e49f2440a7281ace8bceb7ae06877784\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:35:00.430033 containerd[1470]: time="2025-01-13T20:35:00.429927354Z" level=info msg="CreateContainer within sandbox \"82ce16c66500aae3cafec54b955b2680e49f2440a7281ace8bceb7ae06877784\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1ca1ae9065cb485fb5d1eafc710d6f9578c7220f9ff406ee6d1d4613c71d35b0\""
Jan 13 20:35:00.434388 containerd[1470]: time="2025-01-13T20:35:00.432624811Z" level=info msg="StartContainer for \"1ca1ae9065cb485fb5d1eafc710d6f9578c7220f9ff406ee6d1d4613c71d35b0\""
Jan 13 20:35:00.491741 systemd[1]: Started cri-containerd-1ca1ae9065cb485fb5d1eafc710d6f9578c7220f9ff406ee6d1d4613c71d35b0.scope - libcontainer container 1ca1ae9065cb485fb5d1eafc710d6f9578c7220f9ff406ee6d1d4613c71d35b0.
Jan 13 20:35:00.516962 systemd[1]: cri-containerd-1ca1ae9065cb485fb5d1eafc710d6f9578c7220f9ff406ee6d1d4613c71d35b0.scope: Deactivated successfully.
Jan 13 20:35:00.524141 containerd[1470]: time="2025-01-13T20:35:00.524028363Z" level=info msg="StartContainer for \"1ca1ae9065cb485fb5d1eafc710d6f9578c7220f9ff406ee6d1d4613c71d35b0\" returns successfully"
Jan 13 20:35:00.549998 containerd[1470]: time="2025-01-13T20:35:00.549894132Z" level=info msg="shim disconnected" id=1ca1ae9065cb485fb5d1eafc710d6f9578c7220f9ff406ee6d1d4613c71d35b0 namespace=k8s.io
Jan 13 20:35:00.549998 containerd[1470]: time="2025-01-13T20:35:00.549968461Z" level=warning msg="cleaning up after shim disconnected" id=1ca1ae9065cb485fb5d1eafc710d6f9578c7220f9ff406ee6d1d4613c71d35b0 namespace=k8s.io
Jan 13 20:35:00.549998 containerd[1470]: time="2025-01-13T20:35:00.549981517Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:35:00.716718 kubelet[2622]: I0113 20:35:00.716620 2622 setters.go:600] "Node became not ready" node="ci-4186-1-0-8-e51fb1a5ac.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:35:00Z","lastTransitionTime":"2025-01-13T20:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:35:01.280676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ca1ae9065cb485fb5d1eafc710d6f9578c7220f9ff406ee6d1d4613c71d35b0-rootfs.mount: Deactivated successfully.
Jan 13 20:35:01.423682 containerd[1470]: time="2025-01-13T20:35:01.422357401Z" level=info msg="CreateContainer within sandbox \"82ce16c66500aae3cafec54b955b2680e49f2440a7281ace8bceb7ae06877784\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:35:01.468611 containerd[1470]: time="2025-01-13T20:35:01.468479232Z" level=info msg="CreateContainer within sandbox \"82ce16c66500aae3cafec54b955b2680e49f2440a7281ace8bceb7ae06877784\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"beb23dcafc2c0784aa35536b5543d64888da59e929a9df88d5d2f321575099a8\""
Jan 13 20:35:01.469649 containerd[1470]: time="2025-01-13T20:35:01.469524813Z" level=info msg="StartContainer for \"beb23dcafc2c0784aa35536b5543d64888da59e929a9df88d5d2f321575099a8\""
Jan 13 20:35:01.503710 systemd[1]: Started cri-containerd-beb23dcafc2c0784aa35536b5543d64888da59e929a9df88d5d2f321575099a8.scope - libcontainer container beb23dcafc2c0784aa35536b5543d64888da59e929a9df88d5d2f321575099a8.
Jan 13 20:35:01.540973 containerd[1470]: time="2025-01-13T20:35:01.540931533Z" level=info msg="StartContainer for \"beb23dcafc2c0784aa35536b5543d64888da59e929a9df88d5d2f321575099a8\" returns successfully"
Jan 13 20:35:01.915600 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:35:01.965585 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jan 13 20:35:02.280249 systemd[1]: run-containerd-runc-k8s.io-beb23dcafc2c0784aa35536b5543d64888da59e929a9df88d5d2f321575099a8-runc.2WQ683.mount: Deactivated successfully.
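The kubelet entry above flips the node's Ready condition to False with reason KubeletNotReady because the CNI plugin is not yet initialized; it clears once the newly started cilium-agent writes its CNI configuration. A small client-go sketch for inspecting that condition is below; the kubeconfig path is an assumption, and the node name is taken from the log entry.

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the cluster at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(),
		"ci-4186-1-0-8-e51fb1a5ac.novalocal", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// The Ready condition carries the same reason/message the kubelet logged.
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n",
				cond.Status, cond.Reason, cond.Message)
		}
	}
}
```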
Jan 13 20:35:02.438284 kubelet[2622]: I0113 20:35:02.435037 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jxfdx" podStartSLOduration=7.435008645 podStartE2EDuration="7.435008645s" podCreationTimestamp="2025-01-13 20:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:02.434715052 +0000 UTC m=+154.805701709" watchObservedRunningTime="2025-01-13 20:35:02.435008645 +0000 UTC m=+154.805995283"
Jan 13 20:35:05.291148 systemd-networkd[1374]: lxc_health: Link UP
Jan 13 20:35:05.298823 systemd-networkd[1374]: lxc_health: Gained carrier
Jan 13 20:35:06.500731 systemd-networkd[1374]: lxc_health: Gained IPv6LL
Jan 13 20:35:08.619518 systemd[1]: run-containerd-runc-k8s.io-beb23dcafc2c0784aa35536b5543d64888da59e929a9df88d5d2f321575099a8-runc.IyyuAW.mount: Deactivated successfully.
Jan 13 20:35:10.858912 kubelet[2622]: E0113 20:35:10.858865 2622 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44066->127.0.0.1:43469: write tcp 127.0.0.1:44066->127.0.0.1:43469: write: broken pipe
Jan 13 20:35:11.037473 sshd[4523]: Connection closed by 172.24.4.1 port 57244
Jan 13 20:35:11.040349 sshd-session[4461]: pam_unix(sshd:session): session closed for user core
Jan 13 20:35:11.052427 systemd[1]: sshd@23-172.24.4.206:22-172.24.4.1:57244.service: Deactivated successfully.
Jan 13 20:35:11.059815 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 20:35:11.066971 systemd-logind[1447]: Session 26 logged out. Waiting for processes to exit.
Jan 13 20:35:11.072909 systemd-logind[1447]: Removed session 26.
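The pod_startup_latency_tracker entry above reports podStartSLOduration=7.435008645: since both pulling timestamps are the zero value (no image pull occurred), that figure is exactly watchObservedRunningTime minus podCreationTimestamp. A short sketch reproducing the arithmetic from the logged values:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Layout matching the timestamps as they appear in the kubelet entry;
	// the fractional-second field is optional when parsing.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-01-13 20:34:55 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-01-13 20:35:02.435008645 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}

	// Prints 7.435008645s, matching podStartSLOduration in the log.
	fmt.Println(running.Sub(created))
}
```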