Jan 30 15:42:20.119524 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 15:42:20.119569 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:42:20.119587 kernel: BIOS-provided physical RAM map: Jan 30 15:42:20.119601 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 15:42:20.119614 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 15:42:20.119631 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 15:42:20.119647 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 30 15:42:20.119661 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 30 15:42:20.119674 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 15:42:20.119688 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 15:42:20.119701 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 30 15:42:20.119715 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 30 15:42:20.119728 kernel: NX (Execute Disable) protection: active Jan 30 15:42:20.119742 kernel: APIC: Static calls initialized Jan 30 15:42:20.119761 kernel: SMBIOS 3.0.0 present. Jan 30 15:42:20.119776 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 30 15:42:20.119790 kernel: Hypervisor detected: KVM Jan 30 15:42:20.119804 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 15:42:20.119818 kernel: kvm-clock: using sched offset of 3436698001 cycles Jan 30 15:42:20.119837 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 15:42:20.119852 kernel: tsc: Detected 1996.249 MHz processor Jan 30 15:42:20.119867 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 15:42:20.119882 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 15:42:20.119923 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 30 15:42:20.119940 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 15:42:20.119955 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 15:42:20.119970 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 30 15:42:20.119984 kernel: ACPI: Early table checksum verification disabled Jan 30 15:42:20.120003 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 30 15:42:20.120018 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:42:20.120033 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:42:20.120047 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:42:20.120062 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 30 15:42:20.120076 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:42:20.120091 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:42:20.120105 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 30 15:42:20.120119 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 30 15:42:20.120138 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 30 15:42:20.120152 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 30 15:42:20.120167 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 30 15:42:20.120187 kernel: No NUMA configuration found Jan 30 15:42:20.120203 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 30 15:42:20.120218 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 30 15:42:20.120236 kernel: Zone ranges: Jan 30 15:42:20.120252 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 15:42:20.120267 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 15:42:20.120282 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 30 15:42:20.120298 kernel: Movable zone start for each node Jan 30 15:42:20.120313 kernel: Early memory node ranges Jan 30 15:42:20.120328 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 15:42:20.120343 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 30 15:42:20.120361 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 30 15:42:20.120376 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 30 15:42:20.120392 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 15:42:20.120407 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 15:42:20.120422 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 30 15:42:20.120437 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 15:42:20.120453 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 15:42:20.120468 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 15:42:20.120483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 15:42:20.120501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 15:42:20.120517 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 15:42:20.120532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 15:42:20.120547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 15:42:20.120562 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 15:42:20.120578 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 15:42:20.120619 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 15:42:20.120635 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 30 15:42:20.120651 kernel: Booting paravirtualized kernel on KVM Jan 30 15:42:20.120670 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 15:42:20.120686 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 15:42:20.120701 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 15:42:20.120716 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 15:42:20.120733 kernel: pcpu-alloc: [0] 0 1 Jan 30 15:42:20.120755 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 15:42:20.120778 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:42:20.120801 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 15:42:20.120827 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 15:42:20.120852 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 15:42:20.120876 kernel: Fallback order for Node 0: 0 Jan 30 15:42:20.122933 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 30 15:42:20.122951 kernel: Policy zone: Normal Jan 30 15:42:20.122960 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 15:42:20.122968 kernel: software IO TLB: area num 2. Jan 30 15:42:20.122977 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 227308K reserved, 0K cma-reserved) Jan 30 15:42:20.122985 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 15:42:20.122998 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 15:42:20.123006 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 15:42:20.123014 kernel: Dynamic Preempt: voluntary Jan 30 15:42:20.123023 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 15:42:20.123032 kernel: rcu: RCU event tracing is enabled. Jan 30 15:42:20.123040 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 15:42:20.123048 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 15:42:20.123057 kernel: Rude variant of Tasks RCU enabled. Jan 30 15:42:20.123065 kernel: Tracing variant of Tasks RCU enabled. Jan 30 15:42:20.123076 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 15:42:20.123084 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 15:42:20.123092 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 15:42:20.123101 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 15:42:20.123109 kernel: Console: colour VGA+ 80x25 Jan 30 15:42:20.123117 kernel: printk: console [tty0] enabled Jan 30 15:42:20.123125 kernel: printk: console [ttyS0] enabled Jan 30 15:42:20.123133 kernel: ACPI: Core revision 20230628 Jan 30 15:42:20.123142 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 15:42:20.123150 kernel: x2apic enabled Jan 30 15:42:20.123161 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 15:42:20.123170 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 15:42:20.123178 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 30 15:42:20.123186 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 30 15:42:20.123194 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 30 15:42:20.123203 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 30 15:42:20.123211 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 15:42:20.123219 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 15:42:20.123227 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 15:42:20.123239 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 15:42:20.123247 kernel: Speculative Store Bypass: Vulnerable Jan 30 15:42:20.123255 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 30 15:42:20.123263 kernel: Freeing SMP alternatives memory: 32K Jan 30 15:42:20.123279 kernel: pid_max: default: 32768 minimum: 301 Jan 30 15:42:20.123290 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 15:42:20.123298 kernel: landlock: Up and running. Jan 30 15:42:20.123307 kernel: SELinux: Initializing. Jan 30 15:42:20.123315 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 15:42:20.123324 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 15:42:20.123333 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 30 15:42:20.123344 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 15:42:20.123353 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 15:42:20.123362 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 15:42:20.123370 kernel: Performance Events: AMD PMU driver. Jan 30 15:42:20.123379 kernel: ... version: 0 Jan 30 15:42:20.123390 kernel: ... bit width: 48 Jan 30 15:42:20.123399 kernel: ... generic registers: 4 Jan 30 15:42:20.123407 kernel: ... value mask: 0000ffffffffffff Jan 30 15:42:20.123416 kernel: ... max period: 00007fffffffffff Jan 30 15:42:20.123424 kernel: ... fixed-purpose events: 0 Jan 30 15:42:20.123433 kernel: ... event mask: 000000000000000f Jan 30 15:42:20.123441 kernel: signal: max sigframe size: 1440 Jan 30 15:42:20.123450 kernel: rcu: Hierarchical SRCU implementation. Jan 30 15:42:20.123459 kernel: rcu: Max phase no-delay instances is 400. Jan 30 15:42:20.123469 kernel: smp: Bringing up secondary CPUs ... Jan 30 15:42:20.123477 kernel: smpboot: x86: Booting SMP configuration: Jan 30 15:42:20.123486 kernel: .... 
node #0, CPUs: #1 Jan 30 15:42:20.123495 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 15:42:20.123503 kernel: smpboot: Max logical packages: 2 Jan 30 15:42:20.123512 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 30 15:42:20.123520 kernel: devtmpfs: initialized Jan 30 15:42:20.123529 kernel: x86/mm: Memory block size: 128MB Jan 30 15:42:20.123538 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 15:42:20.123549 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 15:42:20.123558 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 15:42:20.123568 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 15:42:20.123578 kernel: audit: initializing netlink subsys (disabled) Jan 30 15:42:20.123587 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 15:42:20.123595 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 15:42:20.123604 kernel: audit: type=2000 audit(1738251738.993:1): state=initialized audit_enabled=0 res=1 Jan 30 15:42:20.123613 kernel: cpuidle: using governor menu Jan 30 15:42:20.123621 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 15:42:20.123634 kernel: dca service started, version 1.12.1 Jan 30 15:42:20.123642 kernel: PCI: Using configuration type 1 for base access Jan 30 15:42:20.123652 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 30 15:42:20.123661 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 15:42:20.123670 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 15:42:20.123679 kernel: ACPI: Added _OSI(Module Device) Jan 30 15:42:20.123688 kernel: ACPI: Added _OSI(Processor Device) Jan 30 15:42:20.123697 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 15:42:20.123705 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 15:42:20.123716 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 15:42:20.123725 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 15:42:20.123733 kernel: ACPI: Interpreter enabled Jan 30 15:42:20.123742 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 15:42:20.123751 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 15:42:20.123760 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 15:42:20.123769 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 15:42:20.123778 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 30 15:42:20.123787 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 15:42:20.123953 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 15:42:20.124060 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 15:42:20.124151 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 15:42:20.124165 kernel: acpiphp: Slot [3] registered Jan 30 15:42:20.124174 kernel: acpiphp: Slot [4] registered Jan 30 15:42:20.124182 kernel: acpiphp: Slot [5] registered Jan 30 15:42:20.124191 kernel: acpiphp: Slot [6] registered Jan 30 15:42:20.124200 kernel: acpiphp: Slot [7] registered Jan 30 15:42:20.124211 kernel: acpiphp: Slot [8] registered Jan 30 15:42:20.124220 kernel: acpiphp: Slot [9] registered Jan 30 15:42:20.124228 kernel: acpiphp: Slot [10] registered Jan 30 15:42:20.124237 
kernel: acpiphp: Slot [11] registered Jan 30 15:42:20.124245 kernel: acpiphp: Slot [12] registered Jan 30 15:42:20.124254 kernel: acpiphp: Slot [13] registered Jan 30 15:42:20.124262 kernel: acpiphp: Slot [14] registered Jan 30 15:42:20.124271 kernel: acpiphp: Slot [15] registered Jan 30 15:42:20.124279 kernel: acpiphp: Slot [16] registered Jan 30 15:42:20.124290 kernel: acpiphp: Slot [17] registered Jan 30 15:42:20.124298 kernel: acpiphp: Slot [18] registered Jan 30 15:42:20.124307 kernel: acpiphp: Slot [19] registered Jan 30 15:42:20.124316 kernel: acpiphp: Slot [20] registered Jan 30 15:42:20.124324 kernel: acpiphp: Slot [21] registered Jan 30 15:42:20.124333 kernel: acpiphp: Slot [22] registered Jan 30 15:42:20.124341 kernel: acpiphp: Slot [23] registered Jan 30 15:42:20.124350 kernel: acpiphp: Slot [24] registered Jan 30 15:42:20.124358 kernel: acpiphp: Slot [25] registered Jan 30 15:42:20.124367 kernel: acpiphp: Slot [26] registered Jan 30 15:42:20.124377 kernel: acpiphp: Slot [27] registered Jan 30 15:42:20.124386 kernel: acpiphp: Slot [28] registered Jan 30 15:42:20.124394 kernel: acpiphp: Slot [29] registered Jan 30 15:42:20.124403 kernel: acpiphp: Slot [30] registered Jan 30 15:42:20.124412 kernel: acpiphp: Slot [31] registered Jan 30 15:42:20.124420 kernel: PCI host bridge to bus 0000:00 Jan 30 15:42:20.124513 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 15:42:20.124608 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 15:42:20.124700 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 15:42:20.124786 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 15:42:20.124872 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 30 15:42:20.128012 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 15:42:20.128126 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 15:42:20.128229 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 30 15:42:20.128337 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 30 15:42:20.128430 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 30 15:42:20.128523 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 30 15:42:20.128628 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 30 15:42:20.128725 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 30 15:42:20.128823 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 30 15:42:20.130348 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 15:42:20.130452 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 30 15:42:20.130543 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 30 15:42:20.130641 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 30 15:42:20.130732 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 30 15:42:20.130822 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 30 15:42:20.131968 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 30 15:42:20.132072 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 30 15:42:20.132171 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 15:42:20.132268 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 30 15:42:20.132360 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 30 15:42:20.132451 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 30 15:42:20.132541 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 30 15:42:20.132645 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 30 15:42:20.132753 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 30 15:42:20.132859 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 15:42:20.135014 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 30 15:42:20.135113 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 30 15:42:20.135210 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 30 15:42:20.135301 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 30 15:42:20.135391 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 30 15:42:20.135492 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 15:42:20.135590 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 30 15:42:20.135682 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 30 15:42:20.135774 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 30 15:42:20.135787 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 15:42:20.135797 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 15:42:20.135805 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 15:42:20.135815 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 15:42:20.135827 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 15:42:20.135836 kernel: iommu: Default domain type: Translated Jan 30 15:42:20.135845 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 15:42:20.135854 kernel: PCI: Using ACPI for IRQ routing Jan 30 15:42:20.135862 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 15:42:20.135871 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 15:42:20.135880 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 30 15:42:20.138139 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 30 15:42:20.138248 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 30 15:42:20.138346 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 15:42:20.138360 kernel: vgaarb: loaded Jan 30 15:42:20.138369 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 15:42:20.138378 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 15:42:20.138388 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 15:42:20.138397 kernel: pnp: PnP ACPI init Jan 30 15:42:20.138488 kernel: pnp 00:03: [dma 2] Jan 30 15:42:20.138503 kernel: pnp: PnP ACPI: found 5 devices Jan 30 15:42:20.138512 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 15:42:20.138525 kernel: NET: Registered PF_INET protocol family Jan 30 15:42:20.138534 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 15:42:20.138543 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 15:42:20.138552 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 15:42:20.138561 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 15:42:20.138570 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 30 15:42:20.138578 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 15:42:20.138587 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 15:42:20.138598 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 15:42:20.138607 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 15:42:20.138616 kernel: NET: Registered PF_XDP protocol family Jan 30 15:42:20.138700 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 15:42:20.138780 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 15:42:20.138858 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 15:42:20.141979 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 30 15:42:20.142068 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 30 15:42:20.142168 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 30 15:42:20.142270 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 15:42:20.142284 kernel: PCI: CLS 0 bytes, default 64 Jan 30 15:42:20.142293 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 15:42:20.142303 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 30 15:42:20.142312 kernel: Initialise system trusted keyrings Jan 30 15:42:20.142321 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 15:42:20.142330 kernel: Key type asymmetric registered Jan 30 15:42:20.142338 kernel: Asymmetric key parser 'x509' registered Jan 30 15:42:20.142350 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 15:42:20.142359 kernel: io scheduler mq-deadline registered Jan 30 15:42:20.142369 kernel: io scheduler kyber registered Jan 30 15:42:20.142378 kernel: io scheduler bfq registered Jan 30 15:42:20.142386 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 15:42:20.142396 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 30 15:42:20.142405 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 15:42:20.142414 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 30 15:42:20.142423 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 15:42:20.142434 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 15:42:20.142443 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 15:42:20.142452 kernel: random: crng init done Jan 30 15:42:20.142460 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 15:42:20.142469 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 15:42:20.142478 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 15:42:20.142577 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 30 15:42:20.142664 kernel: rtc_cmos 00:04: registered as rtc0 Jan 30 15:42:20.142681 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 15:42:20.142764 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T15:42:19 UTC (1738251739) Jan 30 15:42:20.142847 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 30 15:42:20.142860 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 15:42:20.142869 kernel: NET: Registered PF_INET6 protocol family Jan 30 15:42:20.142878 kernel: Segment Routing with IPv6 Jan 30 15:42:20.142887 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 15:42:20.142896 kernel: NET: Registered PF_PACKET 
protocol family Jan 30 15:42:20.144932 kernel: Key type dns_resolver registered Jan 30 15:42:20.144947 kernel: IPI shorthand broadcast: enabled Jan 30 15:42:20.144957 kernel: sched_clock: Marking stable (1008007668, 179455813)->(1225788917, -38325436) Jan 30 15:42:20.144966 kernel: registered taskstats version 1 Jan 30 15:42:20.144976 kernel: Loading compiled-in X.509 certificates Jan 30 15:42:20.144985 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 15:42:20.145000 kernel: Key type .fscrypt registered Jan 30 15:42:20.145031 kernel: Key type fscrypt-provisioning registered Jan 30 15:42:20.145063 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 15:42:20.145103 kernel: ima: Allocated hash algorithm: sha1 Jan 30 15:42:20.145140 kernel: ima: No architecture policies found Jan 30 15:42:20.145171 kernel: clk: Disabling unused clocks Jan 30 15:42:20.145206 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 15:42:20.145237 kernel: Write protecting the kernel read-only data: 36864k Jan 30 15:42:20.145272 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 15:42:20.145303 kernel: Run /init as init process Jan 30 15:42:20.145338 kernel: with arguments: Jan 30 15:42:20.145370 kernel: /init Jan 30 15:42:20.145401 kernel: with environment: Jan 30 15:42:20.145442 kernel: HOME=/ Jan 30 15:42:20.145474 kernel: TERM=linux Jan 30 15:42:20.145506 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 15:42:20.145552 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 15:42:20.145600 systemd[1]: Detected virtualization kvm. Jan 30 15:42:20.145639 systemd[1]: Detected architecture x86-64. Jan 30 15:42:20.145673 systemd[1]: Running in initrd. Jan 30 15:42:20.145719 systemd[1]: No hostname configured, using default hostname. Jan 30 15:42:20.145759 systemd[1]: Hostname set to . Jan 30 15:42:20.145796 systemd[1]: Initializing machine ID from VM UUID. Jan 30 15:42:20.145815 systemd[1]: Queued start job for default target initrd.target. Jan 30 15:42:20.145826 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:42:20.145836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:42:20.145849 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 15:42:20.145869 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 15:42:20.145883 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 15:42:20.145895 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 15:42:20.145917 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 15:42:20.145928 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 15:42:20.145940 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 30 15:42:20.145950 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:42:20.145960 systemd[1]: Reached target paths.target - Path Units. Jan 30 15:42:20.145970 systemd[1]: Reached target slices.target - Slice Units. Jan 30 15:42:20.145980 systemd[1]: Reached target swap.target - Swaps. Jan 30 15:42:20.145989 systemd[1]: Reached target timers.target - Timer Units. Jan 30 15:42:20.145999 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 15:42:20.146009 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 15:42:20.146019 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 15:42:20.146030 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 15:42:20.146040 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:42:20.146050 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 15:42:20.146060 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:42:20.146070 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 15:42:20.146079 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 15:42:20.146089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 15:42:20.146099 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 15:42:20.146109 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 15:42:20.146304 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 15:42:20.146317 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 15:42:20.146327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:42:20.146337 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 15:42:20.146347 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:42:20.146379 systemd-journald[184]: Collecting audit messages is disabled. Jan 30 15:42:20.146408 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 15:42:20.146422 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 15:42:20.146433 systemd-journald[184]: Journal started Jan 30 15:42:20.146456 systemd-journald[184]: Runtime Journal (/run/log/journal/ba5008f832074ccc9935cc1cff223ce0) is 8.0M, max 78.3M, 70.3M free. Jan 30 15:42:20.146942 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:42:20.146964 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 15:42:20.113845 systemd-modules-load[185]: Inserted module 'overlay' Jan 30 15:42:20.189819 kernel: Bridge firewalling registered Jan 30 15:42:20.148717 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 30 15:42:20.193231 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 15:42:20.195154 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 15:42:20.196797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:42:20.210265 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:42:20.212966 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 30 15:42:20.219020 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 15:42:20.226162 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 15:42:20.228871 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:42:20.242195 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:42:20.246051 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 15:42:20.249214 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:42:20.254185 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:42:20.269223 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 15:42:20.277390 dracut-cmdline[216]: dracut-dracut-053 Jan 30 15:42:20.281862 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:42:20.304094 systemd-resolved[222]: Positive Trust Anchors: Jan 30 15:42:20.304110 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 15:42:20.304152 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 15:42:20.307055 systemd-resolved[222]: Defaulting to hostname 'linux'. Jan 30 15:42:20.307921 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 15:42:20.311454 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:42:20.365972 kernel: SCSI subsystem initialized Jan 30 15:42:20.376929 kernel: Loading iSCSI transport class v2.0-870. Jan 30 15:42:20.387956 kernel: iscsi: registered transport (tcp) Jan 30 15:42:20.409982 kernel: iscsi: registered transport (qla4xxx) Jan 30 15:42:20.410009 kernel: QLogic iSCSI HBA Driver Jan 30 15:42:20.459149 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 15:42:20.465172 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 15:42:20.497328 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 30 15:42:20.497416 kernel: device-mapper: uevent: version 1.0.3 Jan 30 15:42:20.499307 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 15:42:20.575028 kernel: raid6: sse2x4 gen() 5208 MB/s Jan 30 15:42:20.592998 kernel: raid6: sse2x2 gen() 6203 MB/s Jan 30 15:42:20.611403 kernel: raid6: sse2x1 gen() 9628 MB/s Jan 30 15:42:20.611526 kernel: raid6: using algorithm sse2x1 gen() 9628 MB/s Jan 30 15:42:20.630416 kernel: raid6: .... xor() 7144 MB/s, rmw enabled Jan 30 15:42:20.630514 kernel: raid6: using ssse3x2 recovery algorithm Jan 30 15:42:20.651959 kernel: xor: measuring software checksum speed Jan 30 15:42:20.652064 kernel: prefetch64-sse : 16891 MB/sec Jan 30 15:42:20.654267 kernel: generic_sse : 16689 MB/sec Jan 30 15:42:20.654312 kernel: xor: using function: prefetch64-sse (16891 MB/sec) Jan 30 15:42:20.846973 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 15:42:20.865175 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 15:42:20.874210 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:42:20.887954 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jan 30 15:42:20.892445 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:42:20.905243 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 15:42:20.922457 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Jan 30 15:42:20.969826 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 15:42:20.980265 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 15:42:21.025460 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:42:21.037293 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 15:42:21.082728 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 15:42:21.085951 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 15:42:21.087481 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:42:21.088689 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 15:42:21.095128 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 15:42:21.105924 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 30 15:42:21.133038 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 30 15:42:21.133176 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 15:42:21.133199 kernel: GPT:17805311 != 20971519 Jan 30 15:42:21.133212 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 15:42:21.133224 kernel: GPT:17805311 != 20971519 Jan 30 15:42:21.133235 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 15:42:21.133247 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:42:21.113398 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 15:42:21.139725 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 15:42:21.139882 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:42:21.141432 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 30 15:42:21.143574 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:42:21.143706 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:42:21.145405 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:42:21.152190 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:42:21.175951 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (465) Jan 30 15:42:21.176008 kernel: libata version 3.00 loaded. Jan 30 15:42:21.181932 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 30 15:42:21.193264 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (463) Jan 30 15:42:21.193283 kernel: scsi host0: ata_piix Jan 30 15:42:21.193426 kernel: scsi host1: ata_piix Jan 30 15:42:21.193550 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 30 15:42:21.193565 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 30 15:42:21.185116 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 15:42:21.210819 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 15:42:21.243177 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:42:21.249464 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 15:42:21.254175 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 15:42:21.254774 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 15:42:21.270215 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 15:42:21.274378 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:42:21.284082 disk-uuid[504]: Primary Header is updated. Jan 30 15:42:21.284082 disk-uuid[504]: Secondary Entries is updated. Jan 30 15:42:21.284082 disk-uuid[504]: Secondary Header is updated. Jan 30 15:42:21.299489 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:42:21.299829 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:42:21.309942 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:42:22.325045 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:42:22.327490 disk-uuid[510]: The operation has completed successfully. Jan 30 15:42:22.404566 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 15:42:22.404817 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 15:42:22.452090 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 15:42:22.457384 sh[528]: Success Jan 30 15:42:22.485972 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 30 15:42:22.570104 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 15:42:22.585493 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 15:42:22.590459 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 15:42:22.637715 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 15:42:22.637826 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:42:22.637857 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 15:42:22.644171 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 15:42:22.648071 kernel: BTRFS info (device dm-0): using free space tree Jan 30 15:42:22.674778 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 15:42:22.677326 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 15:42:22.684230 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 15:42:22.694185 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 15:42:22.724123 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:42:22.724216 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:42:22.728580 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:42:22.741964 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:42:22.759458 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:42:22.759138 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 15:42:22.774333 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 15:42:22.780079 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 15:42:22.829039 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 15:42:22.838136 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 15:42:22.857265 systemd-networkd[711]: lo: Link UP Jan 30 15:42:22.857275 systemd-networkd[711]: lo: Gained carrier Jan 30 15:42:22.858369 systemd-networkd[711]: Enumeration completed Jan 30 15:42:22.858886 systemd-networkd[711]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:42:22.858890 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:42:22.859886 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 15:42:22.860045 systemd-networkd[711]: eth0: Link UP Jan 30 15:42:22.860049 systemd-networkd[711]: eth0: Gained carrier Jan 30 15:42:22.860057 systemd-networkd[711]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:42:22.860567 systemd[1]: Reached target network.target - Network. Jan 30 15:42:22.873585 systemd-networkd[711]: eth0: DHCPv4 address 172.24.4.139/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 30 15:42:22.923480 ignition[641]: Ignition 2.19.0 Jan 30 15:42:22.923494 ignition[641]: Stage: fetch-offline Jan 30 15:42:22.925125 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 15:42:22.923535 ignition[641]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:42:22.923547 ignition[641]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:42:22.923652 ignition[641]: parsed url from cmdline: "" Jan 30 15:42:22.923656 ignition[641]: no config URL provided Jan 30 15:42:22.923663 ignition[641]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 15:42:22.923673 ignition[641]: no config at "/usr/lib/ignition/user.ign" Jan 30 15:42:22.923679 ignition[641]: failed to fetch config: resource requires networking Jan 30 15:42:22.923880 ignition[641]: Ignition finished successfully Jan 30 15:42:22.933108 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 15:42:22.947520 ignition[721]: Ignition 2.19.0 Jan 30 15:42:22.948357 ignition[721]: Stage: fetch Jan 30 15:42:22.948535 ignition[721]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:42:22.948547 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:42:22.948653 ignition[721]: parsed url from cmdline: "" Jan 30 15:42:22.948657 ignition[721]: no config URL provided Jan 30 15:42:22.948663 ignition[721]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 15:42:22.948672 ignition[721]: no config at "/usr/lib/ignition/user.ign" Jan 30 15:42:22.948793 ignition[721]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 30 15:42:22.948813 ignition[721]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 30 15:42:22.948843 ignition[721]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 30 15:42:23.117889 ignition[721]: GET result: OK Jan 30 15:42:23.118117 ignition[721]: parsing config with SHA512: a16795896a51210fcaed8c8dc5154a3d395a1e5c34e0d0e919be518bcc98ce81be5aff99ee6e026e4095b2de1af11873cf6793e94ec32dfbb9f485ec17584a5a Jan 30 15:42:23.124228 unknown[721]: fetched base config from "system" Jan 30 15:42:23.124240 unknown[721]: fetched base config from "system" Jan 30 15:42:23.124691 ignition[721]: fetch: fetch complete Jan 30 15:42:23.124247 unknown[721]: fetched user config from "openstack" Jan 30 15:42:23.124698 ignition[721]: fetch: fetch passed Jan 30 15:42:23.128115 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 15:42:23.124743 ignition[721]: Ignition finished successfully Jan 30 15:42:23.135094 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 15:42:23.162573 ignition[727]: Ignition 2.19.0 Jan 30 15:42:23.163131 ignition[727]: Stage: kargs Jan 30 15:42:23.163397 ignition[727]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:42:23.166000 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 15:42:23.163410 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:42:23.164440 ignition[727]: kargs: kargs passed Jan 30 15:42:23.164489 ignition[727]: Ignition finished successfully Jan 30 15:42:23.176075 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 15:42:23.202444 ignition[733]: Ignition 2.19.0 Jan 30 15:42:23.202466 ignition[733]: Stage: disks Jan 30 15:42:23.202797 ignition[733]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:42:23.202819 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:42:23.204640 ignition[733]: disks: disks passed Jan 30 15:42:23.204725 ignition[733]: Ignition finished successfully Jan 30 15:42:23.205942 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 30 15:42:23.207197 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 15:42:23.207770 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 15:42:23.209069 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 15:42:23.210242 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 15:42:23.211237 systemd[1]: Reached target basic.target - Basic System. Jan 30 15:42:23.217029 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 15:42:23.238947 systemd-fsck[741]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 15:42:23.250911 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 15:42:23.258046 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 15:42:23.401940 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 15:42:23.403721 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 15:42:23.405779 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 15:42:23.417079 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 15:42:23.420824 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 15:42:23.422326 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 15:42:23.428720 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 30 15:42:23.430953 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 15:42:23.430987 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 15:42:23.435709 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (749) Jan 30 15:42:23.437809 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 15:42:23.439058 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:42:23.439923 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:42:23.439945 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:42:23.448328 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:42:23.460146 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 15:42:23.461892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 15:42:23.571339 initrd-setup-root[777]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 15:42:23.581190 initrd-setup-root[784]: cut: /sysroot/etc/group: No such file or directory Jan 30 15:42:23.586682 initrd-setup-root[791]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 15:42:23.603303 initrd-setup-root[798]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 15:42:23.740249 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 15:42:23.746192 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 15:42:23.752368 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 15:42:23.762262 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 30 15:42:23.767639 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:42:23.790243 ignition[865]: INFO : Ignition 2.19.0 Jan 30 15:42:23.790243 ignition[865]: INFO : Stage: mount Jan 30 15:42:23.790243 ignition[865]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:42:23.790243 ignition[865]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:42:23.794872 ignition[865]: INFO : mount: mount passed Jan 30 15:42:23.794872 ignition[865]: INFO : Ignition finished successfully Jan 30 15:42:23.794728 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 15:42:23.801153 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 15:42:24.565692 systemd-networkd[711]: eth0: Gained IPv6LL Jan 30 15:42:30.691519 coreos-metadata[751]: Jan 30 15:42:30.691 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:42:30.734471 coreos-metadata[751]: Jan 30 15:42:30.734 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 15:42:30.750427 coreos-metadata[751]: Jan 30 15:42:30.750 INFO Fetch successful Jan 30 15:42:30.752108 coreos-metadata[751]: Jan 30 15:42:30.750 INFO wrote hostname ci-4081-3-0-c-719cef3df4.novalocal to /sysroot/etc/hostname Jan 30 15:42:30.756214 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 30 15:42:30.756456 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 30 15:42:30.771166 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 15:42:30.798249 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 15:42:30.816041 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (882) Jan 30 15:42:30.823754 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:42:30.823818 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:42:30.828084 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:42:30.840031 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:42:30.844529 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 15:42:30.882697 ignition[900]: INFO : Ignition 2.19.0 Jan 30 15:42:30.882697 ignition[900]: INFO : Stage: files Jan 30 15:42:30.882697 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:42:30.882697 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:42:30.891117 ignition[900]: DEBUG : files: compiled without relabeling support, skipping Jan 30 15:42:30.891117 ignition[900]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 15:42:30.891117 ignition[900]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 15:42:30.891117 ignition[900]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 15:42:30.899010 ignition[900]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 15:42:30.899010 ignition[900]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 15:42:30.899010 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 15:42:30.899010 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 15:42:30.891502 unknown[900]: wrote ssh authorized keys file for user: core Jan 30 15:42:30.954389 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 15:42:31.253211 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 15:42:31.253211 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 15:42:31.253211 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 15:42:31.895157 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 15:42:32.474410 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 15:42:32.474410 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 15:42:32.479687 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 15:42:32.479687 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 15:42:32.479687 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 15:42:32.479687 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 15:42:32.479687 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 15:42:32.479687 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 15:42:32.479687 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 15:42:32.479687 
ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 15:42:32.479687 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 15:42:32.479687 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:42:32.479687 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:42:32.479687 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:42:32.479687 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 15:42:32.972328 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 15:42:34.553838 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:42:34.553838 ignition[900]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 15:42:34.589760 ignition[900]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 15:42:34.592736 ignition[900]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 15:42:34.592736 ignition[900]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 15:42:34.592736 ignition[900]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 30 15:42:34.592736 ignition[900]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 15:42:34.592736 ignition[900]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 15:42:34.592736 ignition[900]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 15:42:34.592736 ignition[900]: INFO : files: files passed Jan 30 15:42:34.592736 ignition[900]: INFO : Ignition finished successfully Jan 30 15:42:34.592799 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 15:42:34.604087 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 15:42:34.607034 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 15:42:34.613988 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 15:42:34.614095 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 30 15:42:34.628408 initrd-setup-root-after-ignition[929]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:42:34.628408 initrd-setup-root-after-ignition[929]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:42:34.630377 initrd-setup-root-after-ignition[933]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:42:34.632446 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 15:42:34.635639 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 15:42:34.646359 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 15:42:34.673016 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 15:42:34.673228 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 15:42:34.677474 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 15:42:34.678527 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 15:42:34.680762 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 15:42:34.690225 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 15:42:34.706658 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 15:42:34.714246 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 15:42:34.724880 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:42:34.725756 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:42:34.728081 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 15:42:34.731535 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 15:42:34.731671 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 15:42:34.734937 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 15:42:34.736080 systemd[1]: Stopped target basic.target - Basic System. Jan 30 15:42:34.737878 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 15:42:34.739723 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 15:42:34.741929 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 15:42:34.744143 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 15:42:34.746352 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 15:42:34.748501 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 15:42:34.750727 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 15:42:34.752876 systemd[1]: Stopped target swap.target - Swaps. Jan 30 15:42:34.754657 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 15:42:34.754771 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 15:42:34.757210 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:42:34.758377 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:42:34.760489 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 15:42:34.761404 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 30 15:42:34.762762 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 15:42:34.762914 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 15:42:34.765684 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 15:42:34.765810 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 15:42:34.766847 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 15:42:34.766978 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 15:42:34.776103 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 15:42:34.780128 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 15:42:34.783051 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 15:42:34.784478 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:42:34.785883 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 15:42:34.786015 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 15:42:34.792675 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 15:42:34.793318 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 15:42:34.796765 ignition[953]: INFO : Ignition 2.19.0 Jan 30 15:42:34.796765 ignition[953]: INFO : Stage: umount Jan 30 15:42:34.799648 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:42:34.799648 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:42:34.799648 ignition[953]: INFO : umount: umount passed Jan 30 15:42:34.799648 ignition[953]: INFO : Ignition finished successfully Jan 30 15:42:34.799109 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 15:42:34.799222 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 15:42:34.802191 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 15:42:34.802265 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 15:42:34.802802 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 15:42:34.802847 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 15:42:34.805127 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 15:42:34.805166 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 15:42:34.806865 systemd[1]: Stopped target network.target - Network. Jan 30 15:42:34.809058 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 15:42:34.809145 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 15:42:34.809883 systemd[1]: Stopped target paths.target - Path Units. Jan 30 15:42:34.810348 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 15:42:34.815687 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:42:34.817011 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 15:42:34.818010 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 15:42:34.818562 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 15:42:34.818602 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 15:42:34.819578 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 15:42:34.819611 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 30 15:42:34.820571 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 15:42:34.820622 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 15:42:34.821837 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 15:42:34.821879 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 15:42:34.822958 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 15:42:34.823975 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 15:42:34.826239 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 15:42:34.826753 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 15:42:34.826845 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 15:42:34.828259 systemd-networkd[711]: eth0: DHCPv6 lease lost Jan 30 15:42:34.828941 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 15:42:34.829014 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 15:42:34.829879 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 15:42:34.830010 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 15:42:34.832482 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 15:42:34.832613 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 15:42:34.834308 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 15:42:34.834592 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:42:34.843015 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 15:42:34.843800 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 15:42:34.843860 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 15:42:34.846412 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 15:42:34.846455 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:42:34.847490 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 15:42:34.847530 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 15:42:34.848756 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 15:42:34.848807 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:42:34.850192 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:42:34.866258 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 15:42:34.866417 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:42:34.869171 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 15:42:34.869263 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 15:42:34.871708 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 15:42:34.871777 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 15:42:34.872505 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 15:42:34.872543 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:42:34.873688 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 15:42:34.873737 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 30 15:42:34.875403 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 15:42:34.875458 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 15:42:34.876614 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 15:42:34.876654 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:42:34.883090 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 15:42:34.883676 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 15:42:34.883732 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:42:34.886290 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 15:42:34.886339 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:42:34.889684 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 15:42:34.889730 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:42:34.890965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:42:34.891007 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:42:34.892693 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 15:42:34.892786 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 15:42:34.893778 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 15:42:34.900077 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 15:42:34.907201 systemd[1]: Switching root. Jan 30 15:42:34.943043 systemd-journald[184]: Journal stopped Jan 30 15:42:36.663584 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 30 15:42:36.663642 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 15:42:36.663659 kernel: SELinux: policy capability open_perms=1 Jan 30 15:42:36.663671 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 15:42:36.663682 kernel: SELinux: policy capability always_check_network=0 Jan 30 15:42:36.663693 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 15:42:36.663706 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 15:42:36.663721 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 15:42:36.663737 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 15:42:36.663748 systemd[1]: Successfully loaded SELinux policy in 72.340ms. Jan 30 15:42:36.663764 kernel: audit: type=1403 audit(1738251755.650:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 15:42:36.665942 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.735ms. Jan 30 15:42:36.665969 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 15:42:36.665981 systemd[1]: Detected virtualization kvm. Jan 30 15:42:36.665994 systemd[1]: Detected architecture x86-64. Jan 30 15:42:36.666009 systemd[1]: Detected first boot. Jan 30 15:42:36.666022 systemd[1]: Hostname set to . Jan 30 15:42:36.666034 systemd[1]: Initializing machine ID from VM UUID. 
Jan 30 15:42:36.666045 zram_generator::config[996]: No configuration found. Jan 30 15:42:36.666062 systemd[1]: Populated /etc with preset unit settings. Jan 30 15:42:36.666074 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 15:42:36.666085 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 15:42:36.666097 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 15:42:36.666109 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 15:42:36.666124 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 15:42:36.666135 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 15:42:36.666148 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 15:42:36.666159 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 15:42:36.666174 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 15:42:36.666186 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 15:42:36.666198 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 15:42:36.666210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:42:36.666224 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:42:36.666240 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 15:42:36.666254 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 15:42:36.666266 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 15:42:36.666278 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 15:42:36.666290 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 15:42:36.666301 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:42:36.666313 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 15:42:36.666327 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 15:42:36.666339 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 15:42:36.666351 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 15:42:36.666363 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:42:36.666375 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 15:42:36.666386 systemd[1]: Reached target slices.target - Slice Units. Jan 30 15:42:36.666398 systemd[1]: Reached target swap.target - Swaps. Jan 30 15:42:36.666409 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 15:42:36.666423 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 15:42:36.666435 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:42:36.666447 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 15:42:36.666459 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:42:36.666470 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 30 15:42:36.666482 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 15:42:36.666494 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 15:42:36.666507 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 15:42:36.666519 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:42:36.666533 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 15:42:36.666545 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 15:42:36.666557 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 15:42:36.666570 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 15:42:36.666582 systemd[1]: Reached target machines.target - Containers. Jan 30 15:42:36.666593 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 15:42:36.666605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:42:36.666617 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 15:42:36.666631 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 15:42:36.666642 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:42:36.666654 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 15:42:36.666666 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 15:42:36.666678 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 15:42:36.666689 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 15:42:36.666701 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 15:42:36.666713 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 15:42:36.666727 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 15:42:36.666739 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 15:42:36.666751 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 15:42:36.666764 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 15:42:36.666776 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 15:42:36.666788 kernel: fuse: init (API version 7.39) Jan 30 15:42:36.666800 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 15:42:36.666811 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 15:42:36.666823 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 15:42:36.666837 kernel: ACPI: bus type drm_connector registered Jan 30 15:42:36.666848 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 15:42:36.666860 systemd[1]: Stopped verity-setup.service. Jan 30 15:42:36.666872 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 15:42:36.666884 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 15:42:36.666910 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 15:42:36.666949 systemd-journald[1082]: Collecting audit messages is disabled. Jan 30 15:42:36.666979 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 15:42:36.666994 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 15:42:36.667007 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 15:42:36.667019 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 15:42:36.667032 systemd-journald[1082]: Journal started Jan 30 15:42:36.667056 systemd-journald[1082]: Runtime Journal (/run/log/journal/ba5008f832074ccc9935cc1cff223ce0) is 8.0M, max 78.3M, 70.3M free. Jan 30 15:42:36.315565 systemd[1]: Queued start job for default target multi-user.target. Jan 30 15:42:36.337455 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 15:42:36.337888 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 15:42:36.670922 kernel: loop: module loaded Jan 30 15:42:36.670953 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 15:42:36.674129 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:42:36.674932 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 15:42:36.675667 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 15:42:36.675800 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 15:42:36.676679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:42:36.676801 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:42:36.677540 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 15:42:36.677668 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 15:42:36.678434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:42:36.678557 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:42:36.679318 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 15:42:36.679439 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 15:42:36.680223 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 15:42:36.680354 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 15:42:36.681169 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 15:42:36.681864 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 15:42:36.682739 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 15:42:36.693486 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 15:42:36.699743 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 15:42:36.705012 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 15:42:36.705594 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 15:42:36.705634 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 30 15:42:36.709270 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 15:42:36.715074 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 15:42:36.718646 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 15:42:36.719875 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:42:36.729071 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 15:42:36.731075 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 15:42:36.731672 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 15:42:36.735030 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 15:42:36.735669 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 15:42:36.739187 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:42:36.743046 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 15:42:36.750101 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 15:42:36.753488 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:42:36.755614 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 15:42:36.757421 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 15:42:36.758322 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 15:42:36.773100 systemd-journald[1082]: Time spent on flushing to /var/log/journal/ba5008f832074ccc9935cc1cff223ce0 is 49.833ms for 951 entries. Jan 30 15:42:36.773100 systemd-journald[1082]: System Journal (/var/log/journal/ba5008f832074ccc9935cc1cff223ce0) is 8.0M, max 584.8M, 576.8M free. Jan 30 15:42:36.839123 systemd-journald[1082]: Received client request to flush runtime journal. Jan 30 15:42:36.839164 kernel: loop0: detected capacity change from 0 to 210664 Jan 30 15:42:36.777107 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 15:42:36.792208 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 15:42:36.793020 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 15:42:36.803017 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 15:42:36.820611 udevadm[1137]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 15:42:36.823983 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:42:36.843379 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 15:42:36.893964 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 15:42:36.894161 systemd-tmpfiles[1130]: ACLs are not supported, ignoring. Jan 30 15:42:36.894177 systemd-tmpfiles[1130]: ACLs are not supported, ignoring. 
Jan 30 15:42:36.903332 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:42:36.914419 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 15:42:36.915746 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 15:42:36.918963 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 15:42:36.925943 kernel: loop1: detected capacity change from 0 to 8 Jan 30 15:42:36.957287 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 15:42:36.977213 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 15:42:36.983411 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 15:42:37.018531 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Jan 30 15:42:37.018553 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Jan 30 15:42:37.025465 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:42:37.027917 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 15:42:37.091939 kernel: loop4: detected capacity change from 0 to 210664 Jan 30 15:42:37.161982 kernel: loop5: detected capacity change from 0 to 8 Jan 30 15:42:37.169201 kernel: loop6: detected capacity change from 0 to 140768 Jan 30 15:42:37.231964 kernel: loop7: detected capacity change from 0 to 142488 Jan 30 15:42:37.288337 (sd-merge)[1159]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 30 15:42:37.288972 (sd-merge)[1159]: Merged extensions into '/usr'. Jan 30 15:42:37.295579 systemd[1]: Reloading requested from client PID 1129 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 15:42:37.295596 systemd[1]: Reloading... Jan 30 15:42:37.405975 zram_generator::config[1188]: No configuration found. Jan 30 15:42:37.568834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:42:37.634192 systemd[1]: Reloading finished in 338 ms. Jan 30 15:42:37.665811 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 15:42:37.675071 systemd[1]: Starting ensure-sysext.service... Jan 30 15:42:37.676495 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 15:42:37.703014 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 15:42:37.703379 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 15:42:37.703385 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Jan 30 15:42:37.703395 systemd[1]: Reloading... Jan 30 15:42:37.704261 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 15:42:37.704569 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jan 30 15:42:37.704637 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jan 30 15:42:37.710461 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 30 15:42:37.710477 systemd-tmpfiles[1241]: Skipping /boot Jan 30 15:42:37.725306 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 15:42:37.725319 systemd-tmpfiles[1241]: Skipping /boot Jan 30 15:42:37.806937 zram_generator::config[1272]: No configuration found. Jan 30 15:42:37.872479 ldconfig[1124]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 15:42:37.955084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:42:38.012129 systemd[1]: Reloading finished in 308 ms. Jan 30 15:42:38.029497 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 15:42:38.030676 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 15:42:38.031635 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:42:38.047076 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 15:42:38.056124 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 15:42:38.061088 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 15:42:38.065168 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 15:42:38.068758 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:42:38.070573 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 15:42:38.078697 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:42:38.079040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:42:38.086250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:42:38.090186 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 15:42:38.093174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 15:42:38.094109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:42:38.094254 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:42:38.099156 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:42:38.099331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:42:38.099500 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:42:38.099604 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:42:38.103268 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 15:42:38.103507 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:42:38.114305 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 15:42:38.116105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:42:38.116294 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:42:38.119941 systemd[1]: Finished ensure-sysext.service. Jan 30 15:42:38.129145 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 15:42:38.134068 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 15:42:38.146298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:42:38.146522 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:42:38.151567 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 15:42:38.166177 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 15:42:38.166499 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 15:42:38.171721 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 15:42:38.171870 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 15:42:38.175485 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 15:42:38.176428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:42:38.176602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:42:38.179225 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 15:42:38.179289 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 15:42:38.182760 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Jan 30 15:42:38.186855 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 15:42:38.205226 augenrules[1363]: No rules Jan 30 15:42:38.207416 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 15:42:38.212668 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 15:42:38.227287 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:42:38.236081 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 15:42:38.239563 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 15:42:38.240341 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 15:42:38.241231 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 15:42:38.363193 systemd-resolved[1333]: Positive Trust Anchors: Jan 30 15:42:38.363208 systemd-resolved[1333]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 15:42:38.363253 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 15:42:38.373234 systemd-resolved[1333]: Using system hostname 'ci-4081-3-0-c-719cef3df4.novalocal'. Jan 30 15:42:38.374944 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 15:42:38.375600 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:42:38.377587 systemd-networkd[1374]: lo: Link UP Jan 30 15:42:38.377840 systemd-networkd[1374]: lo: Gained carrier Jan 30 15:42:38.383096 systemd-networkd[1374]: Enumeration completed Jan 30 15:42:38.383691 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:42:38.383698 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:42:38.383792 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 15:42:38.384379 systemd-networkd[1374]: eth0: Link UP Jan 30 15:42:38.384383 systemd-networkd[1374]: eth0: Gained carrier Jan 30 15:42:38.384398 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:42:38.384566 systemd[1]: Reached target network.target - Network. Jan 30 15:42:38.391767 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 15:42:38.392557 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 15:42:38.393880 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 15:42:38.396988 systemd-networkd[1374]: eth0: DHCPv4 address 172.24.4.139/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 30 15:42:38.397557 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 15:42:38.399046 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 30 15:42:38.849879 systemd-timesyncd[1346]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Jan 30 15:42:38.849926 systemd-timesyncd[1346]: Initial clock synchronization to Thu 2025-01-30 15:42:38.849787 UTC. Jan 30 15:42:38.851338 systemd-resolved[1333]: Clock change detected. Flushing caches. Jan 30 15:42:38.856317 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 30 15:42:38.869625 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1375) Jan 30 15:42:38.899064 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 15:42:38.926570 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 15:42:38.937573 kernel: ACPI: button: Power Button [PWRF] Jan 30 15:42:38.951109 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 15:42:38.964204 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 15:42:38.974651 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 15:42:38.974710 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 15:42:38.975426 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 15:42:38.979103 kernel: Console: switching to colour dummy device 80x25 Jan 30 15:42:38.981585 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 15:42:38.984304 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 15:42:38.984344 kernel: [drm] features: -context_init Jan 30 15:42:38.986587 kernel: [drm] number of scanouts: 1 Jan 30 15:42:38.987554 kernel: [drm] number of cap sets: 0 Jan 30 15:42:38.991471 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 15:42:38.993844 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 15:42:38.993879 kernel: Console: switching to colour frame buffer device 160x50 Jan 30 15:42:38.998683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:42:39.008564 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 15:42:39.019965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:42:39.020162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:42:39.026375 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:42:39.031030 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 15:42:39.034835 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:42:39.035012 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:42:39.042679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:42:39.043619 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 15:42:39.046078 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 15:42:39.079280 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 15:42:39.109839 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 15:42:39.110679 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:42:39.113825 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 15:42:39.126563 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 15:42:39.134185 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:42:39.136244 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 30 15:42:39.136443 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 15:42:39.136765 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 15:42:39.137639 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 15:42:39.137822 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 15:42:39.137921 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 15:42:39.137992 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 15:42:39.138020 systemd[1]: Reached target paths.target - Path Units. Jan 30 15:42:39.138089 systemd[1]: Reached target timers.target - Timer Units. Jan 30 15:42:39.139054 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 15:42:39.140663 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 15:42:39.146506 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 15:42:39.147860 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 15:42:39.148463 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 15:42:39.152846 systemd[1]: Reached target basic.target - Basic System. Jan 30 15:42:39.156424 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 15:42:39.156577 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 15:42:39.164681 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 15:42:39.168752 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 15:42:39.175807 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 15:42:39.188698 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 15:42:39.193571 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 15:42:39.194712 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 15:42:39.202490 jq[1435]: false Jan 30 15:42:39.202768 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 15:42:39.208816 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 15:42:39.221769 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 15:42:39.231783 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 30 15:42:39.234948 extend-filesystems[1436]: Found loop4 Jan 30 15:42:39.234948 extend-filesystems[1436]: Found loop5 Jan 30 15:42:39.234948 extend-filesystems[1436]: Found loop6 Jan 30 15:42:39.234948 extend-filesystems[1436]: Found loop7 Jan 30 15:42:39.234948 extend-filesystems[1436]: Found vda Jan 30 15:42:39.234948 extend-filesystems[1436]: Found vda1 Jan 30 15:42:39.234948 extend-filesystems[1436]: Found vda2 Jan 30 15:42:39.234948 extend-filesystems[1436]: Found vda3 Jan 30 15:42:39.234948 extend-filesystems[1436]: Found usr Jan 30 15:42:39.234948 extend-filesystems[1436]: Found vda4 Jan 30 15:42:39.234948 extend-filesystems[1436]: Found vda6 Jan 30 15:42:39.234948 extend-filesystems[1436]: Found vda7 Jan 30 15:42:39.234948 extend-filesystems[1436]: Found vda9 Jan 30 15:42:39.234948 extend-filesystems[1436]: Checking size of /dev/vda9 Jan 30 15:42:39.347293 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 30 15:42:39.347340 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 30 15:42:39.347359 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1391) Jan 30 15:42:39.254782 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 15:42:39.291259 dbus-daemon[1432]: [system] SELinux support is enabled Jan 30 15:42:39.350691 extend-filesystems[1436]: Resized partition /dev/vda9 Jan 30 15:42:39.269707 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 15:42:39.353936 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024) Jan 30 15:42:39.353936 extend-filesystems[1448]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 15:42:39.353936 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 15:42:39.353936 extend-filesystems[1448]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 30 15:42:39.271789 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 15:42:39.377664 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Jan 30 15:42:39.278806 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 15:42:39.393111 update_engine[1451]: I20250130 15:42:39.362159 1451 main.cc:92] Flatcar Update Engine starting Jan 30 15:42:39.393111 update_engine[1451]: I20250130 15:42:39.364183 1451 update_check_scheduler.cc:74] Next update check in 10m17s Jan 30 15:42:39.314472 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 15:42:39.393956 jq[1457]: true Jan 30 15:42:39.316012 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 15:42:39.325641 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 15:42:39.340929 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 15:42:39.341106 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 15:42:39.341408 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 15:42:39.341587 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 15:42:39.366506 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 15:42:39.366727 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 30 15:42:39.373824 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 15:42:39.374008 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 15:42:39.403337 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 15:42:39.417890 jq[1462]: true Jan 30 15:42:39.438263 systemd[1]: Started update-engine.service - Update Engine. Jan 30 15:42:39.441982 tar[1461]: linux-amd64/helm Jan 30 15:42:39.442984 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 15:42:39.443038 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 15:42:39.443699 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 15:42:39.443717 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 15:42:39.453747 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 15:42:39.476141 systemd-logind[1444]: New seat seat0. Jan 30 15:42:39.480342 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 15:42:39.481475 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 15:42:39.481736 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 15:42:39.610087 bash[1491]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:42:39.618485 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 15:42:39.636847 systemd[1]: Starting sshkeys.service... Jan 30 15:42:39.655619 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 15:42:39.664935 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 15:42:39.726711 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 15:42:39.862603 containerd[1464]: time="2025-01-30T15:42:39.862481323Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 15:42:39.896038 containerd[1464]: time="2025-01-30T15:42:39.895976654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:42:39.898089 containerd[1464]: time="2025-01-30T15:42:39.898057025Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:42:39.898175 containerd[1464]: time="2025-01-30T15:42:39.898160039Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 15:42:39.898237 containerd[1464]: time="2025-01-30T15:42:39.898223257Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 30 15:42:39.898440 containerd[1464]: time="2025-01-30T15:42:39.898421699Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 15:42:39.898503 containerd[1464]: time="2025-01-30T15:42:39.898490298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 15:42:39.898639 containerd[1464]: time="2025-01-30T15:42:39.898619030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:42:39.898714 containerd[1464]: time="2025-01-30T15:42:39.898700061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:42:39.898930 containerd[1464]: time="2025-01-30T15:42:39.898910216Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:42:39.899005 containerd[1464]: time="2025-01-30T15:42:39.898988352Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 15:42:39.899066 containerd[1464]: time="2025-01-30T15:42:39.899051671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:42:39.899127 containerd[1464]: time="2025-01-30T15:42:39.899111623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 15:42:39.899254 containerd[1464]: time="2025-01-30T15:42:39.899236628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:42:39.899505 containerd[1464]: time="2025-01-30T15:42:39.899487098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:42:39.899836 containerd[1464]: time="2025-01-30T15:42:39.899727859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:42:39.899836 containerd[1464]: time="2025-01-30T15:42:39.899748788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 15:42:39.900072 containerd[1464]: time="2025-01-30T15:42:39.899944886Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 15:42:39.900072 containerd[1464]: time="2025-01-30T15:42:39.900044924Z" level=info msg="metadata content store policy set" policy=shared Jan 30 15:42:39.908480 containerd[1464]: time="2025-01-30T15:42:39.908194729Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 15:42:39.908480 containerd[1464]: time="2025-01-30T15:42:39.908248780Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 15:42:39.908480 containerd[1464]: time="2025-01-30T15:42:39.908266133Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 30 15:42:39.908480 containerd[1464]: time="2025-01-30T15:42:39.908290208Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 15:42:39.908480 containerd[1464]: time="2025-01-30T15:42:39.908306909Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 15:42:39.908480 containerd[1464]: time="2025-01-30T15:42:39.908432004Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.910902387Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.911083877Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.911108493Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.911126537Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.911158087Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.911178755Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.911197230Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.911218099Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.911238728Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.911258375Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.911272541Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.911290405Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.911318097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912552 containerd[1464]: time="2025-01-30T15:42:39.911339116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911358052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911378169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911397095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911418094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911436288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911454893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911474119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911494397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911511960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911548839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911565671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911600667Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911629481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911643958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.912862 containerd[1464]: time="2025-01-30T15:42:39.911661280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 15:42:39.913152 containerd[1464]: time="2025-01-30T15:42:39.911714761Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 15:42:39.913152 containerd[1464]: time="2025-01-30T15:42:39.911739737Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 15:42:39.913152 containerd[1464]: time="2025-01-30T15:42:39.911752581Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 15:42:39.913152 containerd[1464]: time="2025-01-30T15:42:39.911771487Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 15:42:39.913152 containerd[1464]: time="2025-01-30T15:42:39.911787517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.913152 containerd[1464]: time="2025-01-30T15:42:39.911807344Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 30 15:42:39.913152 containerd[1464]: time="2025-01-30T15:42:39.911825077Z" level=info msg="NRI interface is disabled by configuration." Jan 30 15:42:39.913152 containerd[1464]: time="2025-01-30T15:42:39.911836419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 15:42:39.913327 containerd[1464]: time="2025-01-30T15:42:39.912155828Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 15:42:39.913327 containerd[1464]: time="2025-01-30T15:42:39.912234646Z" level=info msg="Connect containerd service" Jan 30 15:42:39.913327 containerd[1464]: time="2025-01-30T15:42:39.912276895Z" level=info msg="using legacy CRI server" Jan 30 15:42:39.913327 containerd[1464]: time="2025-01-30T15:42:39.912285050Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 15:42:39.913327 containerd[1464]: time="2025-01-30T15:42:39.912397711Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 15:42:39.914339 
containerd[1464]: time="2025-01-30T15:42:39.914316089Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 15:42:39.914731 containerd[1464]: time="2025-01-30T15:42:39.914713855Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 15:42:39.914845 containerd[1464]: time="2025-01-30T15:42:39.914829112Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 15:42:39.914983 containerd[1464]: time="2025-01-30T15:42:39.914942965Z" level=info msg="Start subscribing containerd event" Jan 30 15:42:39.915120 containerd[1464]: time="2025-01-30T15:42:39.915060495Z" level=info msg="Start recovering state" Jan 30 15:42:39.918567 containerd[1464]: time="2025-01-30T15:42:39.915214003Z" level=info msg="Start event monitor" Jan 30 15:42:39.918567 containerd[1464]: time="2025-01-30T15:42:39.915240343Z" level=info msg="Start snapshots syncer" Jan 30 15:42:39.918567 containerd[1464]: time="2025-01-30T15:42:39.915275439Z" level=info msg="Start cni network conf syncer for default" Jan 30 15:42:39.918567 containerd[1464]: time="2025-01-30T15:42:39.915284846Z" level=info msg="Start streaming server" Jan 30 15:42:39.915424 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 15:42:39.920144 containerd[1464]: time="2025-01-30T15:42:39.919879934Z" level=info msg="containerd successfully booted in 0.058276s" Jan 30 15:42:40.177683 tar[1461]: linux-amd64/LICENSE Jan 30 15:42:40.178068 tar[1461]: linux-amd64/README.md Jan 30 15:42:40.193910 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 15:42:40.298659 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 15:42:40.336889 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 15:42:40.352053 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 15:42:40.356793 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 15:42:40.356964 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 15:42:40.369266 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 15:42:40.377980 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 15:42:40.390609 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 15:42:40.398393 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 15:42:40.401113 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 15:42:40.477838 systemd-networkd[1374]: eth0: Gained IPv6LL Jan 30 15:42:40.482942 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 15:42:40.487359 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 15:42:40.499046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:42:40.516125 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 15:42:40.571986 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 15:42:42.358862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
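The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected until a CNI plugin installs a network configuration. A small sketch, assuming only the directory path reported by containerd's CRI config, that reproduces the same check:

# Sketch: mirror containerd's CNI config lookup; not part of the boot log.
import pathlib

CNI_CONF_DIR = pathlib.Path("/etc/cni/net.d")

# The CRI plugin looks for CNI configuration files here; until a network
# plugin (flannel, calico, ...) writes one, pod networking stays uninitialized.
if CNI_CONF_DIR.is_dir():
    configs = sorted(
        p for p in CNI_CONF_DIR.iterdir()
        if p.suffix in {".conf", ".conflist", ".json"}
    )
else:
    configs = []

if configs:
    print("CNI configs found:", ", ".join(p.name for p in configs))
else:
    print(f"no network config found in {CNI_CONF_DIR} (matches the log above)")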
Jan 30 15:42:42.375287 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:42:43.080595 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 15:42:43.097438 systemd[1]: Started sshd@0-172.24.4.139:22-172.24.4.1:49814.service - OpenSSH per-connection server daemon (172.24.4.1:49814). Jan 30 15:42:43.785419 kubelet[1548]: E0130 15:42:43.785288 1548 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:42:43.789741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:42:43.790028 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:42:43.791312 systemd[1]: kubelet.service: Consumed 2.207s CPU time. Jan 30 15:42:44.768255 sshd[1555]: Accepted publickey for core from 172.24.4.1 port 49814 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:42:44.816607 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:42:44.842747 systemd-logind[1444]: New session 1 of user core. Jan 30 15:42:44.846812 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 15:42:44.861225 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 15:42:44.890956 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 15:42:44.907940 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 15:42:44.930884 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 15:42:45.089189 systemd[1562]: Queued start job for default target default.target. Jan 30 15:42:45.099408 systemd[1562]: Created slice app.slice - User Application Slice. Jan 30 15:42:45.099436 systemd[1562]: Reached target paths.target - Paths. Jan 30 15:42:45.099450 systemd[1562]: Reached target timers.target - Timers. Jan 30 15:42:45.100723 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 15:42:45.116460 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 15:42:45.117175 systemd[1562]: Reached target sockets.target - Sockets. Jan 30 15:42:45.117193 systemd[1562]: Reached target basic.target - Basic System. Jan 30 15:42:45.117231 systemd[1562]: Reached target default.target - Main User Target. Jan 30 15:42:45.117256 systemd[1562]: Startup finished in 173ms. Jan 30 15:42:45.117418 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 15:42:45.129967 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 15:42:45.457053 login[1528]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:42:45.487939 systemd-logind[1444]: New session 2 of user core. Jan 30 15:42:45.492483 login[1529]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:42:45.494903 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 15:42:45.500297 systemd-logind[1444]: New session 3 of user core. Jan 30 15:42:45.517738 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 30 15:42:45.569607 systemd[1]: Started sshd@1-172.24.4.139:22-172.24.4.1:47876.service - OpenSSH per-connection server daemon (172.24.4.1:47876). Jan 30 15:42:46.253959 coreos-metadata[1431]: Jan 30 15:42:46.253 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:42:46.302779 coreos-metadata[1431]: Jan 30 15:42:46.302 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 30 15:42:46.491750 coreos-metadata[1431]: Jan 30 15:42:46.491 INFO Fetch successful Jan 30 15:42:46.491995 coreos-metadata[1431]: Jan 30 15:42:46.491 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 15:42:46.505772 coreos-metadata[1431]: Jan 30 15:42:46.505 INFO Fetch successful Jan 30 15:42:46.505772 coreos-metadata[1431]: Jan 30 15:42:46.505 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 30 15:42:46.519709 coreos-metadata[1431]: Jan 30 15:42:46.519 INFO Fetch successful Jan 30 15:42:46.519709 coreos-metadata[1431]: Jan 30 15:42:46.519 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 30 15:42:46.533733 coreos-metadata[1431]: Jan 30 15:42:46.533 INFO Fetch successful Jan 30 15:42:46.533733 coreos-metadata[1431]: Jan 30 15:42:46.533 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 30 15:42:46.547317 coreos-metadata[1431]: Jan 30 15:42:46.547 INFO Fetch successful Jan 30 15:42:46.547317 coreos-metadata[1431]: Jan 30 15:42:46.547 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 30 15:42:46.562819 coreos-metadata[1431]: Jan 30 15:42:46.562 INFO Fetch successful Jan 30 15:42:46.615030 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 15:42:46.616411 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 15:42:46.761009 coreos-metadata[1499]: Jan 30 15:42:46.760 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:42:46.804039 coreos-metadata[1499]: Jan 30 15:42:46.803 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 30 15:42:46.816843 coreos-metadata[1499]: Jan 30 15:42:46.816 INFO Fetch successful Jan 30 15:42:46.816843 coreos-metadata[1499]: Jan 30 15:42:46.816 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 15:42:46.826791 coreos-metadata[1499]: Jan 30 15:42:46.826 INFO Fetch successful Jan 30 15:42:46.832001 unknown[1499]: wrote ssh authorized keys file for user: core Jan 30 15:42:46.879855 update-ssh-keys[1611]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:42:46.880882 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 15:42:46.884066 systemd[1]: Finished sshkeys.service. Jan 30 15:42:46.888993 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 15:42:46.889449 systemd[1]: Startup finished in 1.222s (kernel) + 15.838s (initrd) + 10.886s (userspace) = 27.946s. Jan 30 15:42:47.098035 sshd[1599]: Accepted publickey for core from 172.24.4.1 port 47876 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:42:47.100962 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:42:47.112079 systemd-logind[1444]: New session 4 of user core. Jan 30 15:42:47.122861 systemd[1]: Started session-4.scope - Session 4 of User core. 
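The coreos-metadata entries above fall back from a config-drive to the OpenStack metadata service and fetch instance details over HTTP. A minimal sketch of the same fetches, using the endpoint URLs exactly as they appear in the log; it only works from inside such an instance:

# Sketch: query the metadata endpoints shown above; not part of the boot log.
import json
import urllib.request

BASE = "http://169.254.169.254"

def fetch(path: str) -> str:
    with urllib.request.urlopen(f"{BASE}{path}", timeout=5) as resp:
        return resp.read().decode()

# Same resources coreos-metadata requests in the log above.
meta = json.loads(fetch("/openstack/2012-08-10/meta_data.json"))
hostname = fetch("/latest/meta-data/hostname")
print("uuid:", meta.get("uuid"), "hostname:", hostname)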
Jan 30 15:42:47.696608 sshd[1599]: pam_unix(sshd:session): session closed for user core Jan 30 15:42:47.707897 systemd[1]: sshd@1-172.24.4.139:22-172.24.4.1:47876.service: Deactivated successfully. Jan 30 15:42:47.711008 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 15:42:47.714146 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Jan 30 15:42:47.720096 systemd[1]: Started sshd@2-172.24.4.139:22-172.24.4.1:47878.service - OpenSSH per-connection server daemon (172.24.4.1:47878). Jan 30 15:42:47.722366 systemd-logind[1444]: Removed session 4. Jan 30 15:42:48.910370 sshd[1619]: Accepted publickey for core from 172.24.4.1 port 47878 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:42:48.913013 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:42:48.924099 systemd-logind[1444]: New session 5 of user core. Jan 30 15:42:48.930840 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 15:42:49.689248 sshd[1619]: pam_unix(sshd:session): session closed for user core Jan 30 15:42:49.699044 systemd[1]: sshd@2-172.24.4.139:22-172.24.4.1:47878.service: Deactivated successfully. Jan 30 15:42:49.701943 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 15:42:49.705852 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Jan 30 15:42:49.711161 systemd[1]: Started sshd@3-172.24.4.139:22-172.24.4.1:47882.service - OpenSSH per-connection server daemon (172.24.4.1:47882). Jan 30 15:42:49.714195 systemd-logind[1444]: Removed session 5. Jan 30 15:42:50.917718 sshd[1626]: Accepted publickey for core from 172.24.4.1 port 47882 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:42:50.920363 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:42:50.930117 systemd-logind[1444]: New session 6 of user core. Jan 30 15:42:50.940821 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 15:42:51.654446 sshd[1626]: pam_unix(sshd:session): session closed for user core Jan 30 15:42:51.664153 systemd[1]: sshd@3-172.24.4.139:22-172.24.4.1:47882.service: Deactivated successfully. Jan 30 15:42:51.667297 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 15:42:51.669115 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Jan 30 15:42:51.677097 systemd[1]: Started sshd@4-172.24.4.139:22-172.24.4.1:47890.service - OpenSSH per-connection server daemon (172.24.4.1:47890). Jan 30 15:42:51.680292 systemd-logind[1444]: Removed session 6. Jan 30 15:42:52.865480 sshd[1633]: Accepted publickey for core from 172.24.4.1 port 47890 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:42:52.868294 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:42:52.877836 systemd-logind[1444]: New session 7 of user core. Jan 30 15:42:52.889842 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 15:42:53.361666 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 15:42:53.362326 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:42:53.384686 sudo[1636]: pam_unix(sudo:session): session closed for user root Jan 30 15:42:53.604677 sshd[1633]: pam_unix(sshd:session): session closed for user core Jan 30 15:42:53.616072 systemd[1]: sshd@4-172.24.4.139:22-172.24.4.1:47890.service: Deactivated successfully. 
Jan 30 15:42:53.620127 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 15:42:53.625752 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Jan 30 15:42:53.632129 systemd[1]: Started sshd@5-172.24.4.139:22-172.24.4.1:33366.service - OpenSSH per-connection server daemon (172.24.4.1:33366). Jan 30 15:42:53.635219 systemd-logind[1444]: Removed session 7. Jan 30 15:42:53.860314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 15:42:53.883331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:42:54.207818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:42:54.220373 (kubelet)[1651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:42:54.324003 kubelet[1651]: E0130 15:42:54.323888 1651 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:42:54.332290 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:42:54.332447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:42:54.971661 sshd[1641]: Accepted publickey for core from 172.24.4.1 port 33366 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:42:54.974715 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:42:54.984234 systemd-logind[1444]: New session 8 of user core. Jan 30 15:42:54.996854 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 15:42:55.444280 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 15:42:55.444974 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:42:55.451840 sudo[1662]: pam_unix(sudo:session): session closed for user root Jan 30 15:42:55.463120 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 15:42:55.464399 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:42:55.488095 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 15:42:55.502595 auditctl[1665]: No rules Jan 30 15:42:55.503393 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 15:42:55.503835 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 15:42:55.511306 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 15:42:55.577609 augenrules[1683]: No rules Jan 30 15:42:55.578784 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 15:42:55.581358 sudo[1661]: pam_unix(sudo:session): session closed for user root Jan 30 15:42:55.748916 sshd[1641]: pam_unix(sshd:session): session closed for user core Jan 30 15:42:55.758920 systemd[1]: sshd@5-172.24.4.139:22-172.24.4.1:33366.service: Deactivated successfully. Jan 30 15:42:55.762001 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 15:42:55.765826 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. 
Jan 30 15:42:55.771154 systemd[1]: Started sshd@6-172.24.4.139:22-172.24.4.1:33380.service - OpenSSH per-connection server daemon (172.24.4.1:33380). Jan 30 15:42:55.774178 systemd-logind[1444]: Removed session 8. Jan 30 15:42:57.192210 sshd[1691]: Accepted publickey for core from 172.24.4.1 port 33380 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:42:57.194416 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:42:57.204752 systemd-logind[1444]: New session 9 of user core. Jan 30 15:42:57.211808 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 15:42:57.750039 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 15:42:57.750753 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:42:58.376889 (dockerd)[1709]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 15:42:58.377161 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 15:42:58.983995 dockerd[1709]: time="2025-01-30T15:42:58.983924635Z" level=info msg="Starting up" Jan 30 15:42:59.190160 systemd[1]: var-lib-docker-metacopy\x2dcheck4205737123-merged.mount: Deactivated successfully. Jan 30 15:42:59.246291 dockerd[1709]: time="2025-01-30T15:42:59.245892606Z" level=info msg="Loading containers: start." Jan 30 15:42:59.423607 kernel: Initializing XFRM netlink socket Jan 30 15:42:59.544959 systemd-networkd[1374]: docker0: Link UP Jan 30 15:42:59.565579 dockerd[1709]: time="2025-01-30T15:42:59.564771363Z" level=info msg="Loading containers: done." Jan 30 15:42:59.587690 dockerd[1709]: time="2025-01-30T15:42:59.587262245Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 15:42:59.587690 dockerd[1709]: time="2025-01-30T15:42:59.587378092Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 15:42:59.587690 dockerd[1709]: time="2025-01-30T15:42:59.587474373Z" level=info msg="Daemon has completed initialization" Jan 30 15:42:59.656124 dockerd[1709]: time="2025-01-30T15:42:59.654759235Z" level=info msg="API listen on /run/docker.sock" Jan 30 15:42:59.656738 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 15:43:01.467090 containerd[1464]: time="2025-01-30T15:43:01.467035369Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 15:43:02.244828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055199961.mount: Deactivated successfully. 
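The dockerd startup above reports storage-driver=overlay2 and version 26.1.0, with the API listening on /run/docker.sock. A small sketch, assuming the docker CLI is installed and the socket is reachable, that reads those same values back from the running daemon:

# Sketch: confirm the storage driver and server version reported above.
import subprocess

driver = subprocess.run(
    ["docker", "info", "--format", "{{.Driver}}"],
    capture_output=True, text=True, check=True,
).stdout.strip()

version = subprocess.run(
    ["docker", "version", "--format", "{{.Server.Version}}"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(f"storage driver: {driver}, server version: {version}")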
Jan 30 15:43:04.140962 containerd[1464]: time="2025-01-30T15:43:04.140805434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:04.142159 containerd[1464]: time="2025-01-30T15:43:04.142104500Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020" Jan 30 15:43:04.143204 containerd[1464]: time="2025-01-30T15:43:04.143161914Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:04.146273 containerd[1464]: time="2025-01-30T15:43:04.146231741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:04.147619 containerd[1464]: time="2025-01-30T15:43:04.147427664Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.680351489s" Jan 30 15:43:04.147619 containerd[1464]: time="2025-01-30T15:43:04.147472408Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 15:43:04.170170 containerd[1464]: time="2025-01-30T15:43:04.170125283Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 15:43:04.360337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 15:43:04.369929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:43:04.688653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:43:04.705057 (kubelet)[1919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:43:04.787458 kubelet[1919]: E0130 15:43:04.787380 1919 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:43:04.791249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:43:04.791602 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
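The repeated kubelet exits above all come from the same cause: /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps scheduling restarts. That file is normally written later by `kubeadm init` or `kubeadm join`, so the failures are expected at this stage. A minimal sketch of the check behind the error:

# Sketch: mirror the failing path in run.go:74 above; not part of the boot log.
import pathlib

KUBELET_CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

if KUBELET_CONFIG.is_file():
    print("kubelet config present:", KUBELET_CONFIG)
else:
    # Matches the log: open /var/lib/kubelet/config.yaml: no such file or directory
    print("kubelet config missing; expected until kubeadm init/join writes it")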
Jan 30 15:43:07.259264 containerd[1464]: time="2025-01-30T15:43:07.259177720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:07.263204 containerd[1464]: time="2025-01-30T15:43:07.263067956Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753" Jan 30 15:43:07.267932 containerd[1464]: time="2025-01-30T15:43:07.267672773Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:07.275182 containerd[1464]: time="2025-01-30T15:43:07.275028599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:07.278155 containerd[1464]: time="2025-01-30T15:43:07.278070925Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 3.107895137s" Jan 30 15:43:07.279296 containerd[1464]: time="2025-01-30T15:43:07.278320473Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 15:43:07.335815 containerd[1464]: time="2025-01-30T15:43:07.335730595Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 15:43:08.895753 containerd[1464]: time="2025-01-30T15:43:08.895647239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:08.897214 containerd[1464]: time="2025-01-30T15:43:08.896912682Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072" Jan 30 15:43:08.898421 containerd[1464]: time="2025-01-30T15:43:08.898366428Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:08.901827 containerd[1464]: time="2025-01-30T15:43:08.901767217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:08.902944 containerd[1464]: time="2025-01-30T15:43:08.902837614Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.567039091s" Jan 30 15:43:08.902944 containerd[1464]: time="2025-01-30T15:43:08.902868161Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 15:43:08.924394 
containerd[1464]: time="2025-01-30T15:43:08.924363426Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 15:43:10.251070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount274356198.mount: Deactivated successfully. Jan 30 15:43:10.741477 containerd[1464]: time="2025-01-30T15:43:10.741205563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:10.742490 containerd[1464]: time="2025-01-30T15:43:10.742284897Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 30 15:43:10.743519 containerd[1464]: time="2025-01-30T15:43:10.743462696Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:10.746817 containerd[1464]: time="2025-01-30T15:43:10.746774066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:10.747558 containerd[1464]: time="2025-01-30T15:43:10.747499727Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.822961594s" Jan 30 15:43:10.747558 containerd[1464]: time="2025-01-30T15:43:10.747551635Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 15:43:10.770057 containerd[1464]: time="2025-01-30T15:43:10.770031105Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 15:43:11.395230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2703697733.mount: Deactivated successfully. 
Jan 30 15:43:12.656847 containerd[1464]: time="2025-01-30T15:43:12.656775393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:12.658128 containerd[1464]: time="2025-01-30T15:43:12.658066372Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 30 15:43:12.659548 containerd[1464]: time="2025-01-30T15:43:12.659488478Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:12.662937 containerd[1464]: time="2025-01-30T15:43:12.662864252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:12.664562 containerd[1464]: time="2025-01-30T15:43:12.664071263Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.893889515s" Jan 30 15:43:12.664562 containerd[1464]: time="2025-01-30T15:43:12.664107040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 15:43:12.687292 containerd[1464]: time="2025-01-30T15:43:12.687243529Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 15:43:13.263691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368607844.mount: Deactivated successfully. 
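The PullImage entries above show containerd fetching the registry.k8s.io control-plane images over its CRI endpoint. A sketch of pulling one of the same images by hand, assuming crictl is installed; the socket path and image name are taken from the log:

# Sketch: pull one of the images shown above through containerd's CRI socket.
import subprocess

CRI_ENDPOINT = "unix:///run/containerd/containerd.sock"
IMAGE = "registry.k8s.io/pause:3.9"

# Pull the image, then list it to confirm it is in the containerd image store.
subprocess.run(
    ["crictl", "--runtime-endpoint", CRI_ENDPOINT, "pull", IMAGE],
    check=True,
)
subprocess.run(
    ["crictl", "--runtime-endpoint", CRI_ENDPOINT, "images", IMAGE],
    check=True,
)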
Jan 30 15:43:13.275641 containerd[1464]: time="2025-01-30T15:43:13.275370927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:13.278813 containerd[1464]: time="2025-01-30T15:43:13.278613046Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 30 15:43:13.278813 containerd[1464]: time="2025-01-30T15:43:13.278664773Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:13.287368 containerd[1464]: time="2025-01-30T15:43:13.287207706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:13.290212 containerd[1464]: time="2025-01-30T15:43:13.289410360Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 602.117719ms" Jan 30 15:43:13.290212 containerd[1464]: time="2025-01-30T15:43:13.289486524Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 15:43:13.339312 containerd[1464]: time="2025-01-30T15:43:13.339204114Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 15:43:14.003642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300895931.mount: Deactivated successfully. Jan 30 15:43:14.860170 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 15:43:14.867730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:43:15.027840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:43:15.035987 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:43:15.438337 kubelet[2062]: E0130 15:43:15.438211 2062 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:43:15.441702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:43:15.442022 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 15:43:17.163907 containerd[1464]: time="2025-01-30T15:43:17.162644375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:17.175475 containerd[1464]: time="2025-01-30T15:43:17.174585917Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 30 15:43:17.178599 containerd[1464]: time="2025-01-30T15:43:17.178071380Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:17.199839 containerd[1464]: time="2025-01-30T15:43:17.199762320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:17.203657 containerd[1464]: time="2025-01-30T15:43:17.203355436Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.864070499s" Jan 30 15:43:17.203657 containerd[1464]: time="2025-01-30T15:43:17.203447619Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 15:43:21.311300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:43:21.320157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:43:21.354822 systemd[1]: Reloading requested from client PID 2140 ('systemctl') (unit session-9.scope)... Jan 30 15:43:21.354859 systemd[1]: Reloading... Jan 30 15:43:21.440593 zram_generator::config[2176]: No configuration found. Jan 30 15:43:21.592753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:43:21.678763 systemd[1]: Reloading finished in 323 ms. Jan 30 15:43:21.723064 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 15:43:21.723129 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 15:43:21.723392 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:43:21.730514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:43:21.840212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:43:21.855790 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 15:43:21.921960 kubelet[2245]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:43:21.921960 kubelet[2245]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 15:43:21.921960 kubelet[2245]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:43:21.922306 kubelet[2245]: I0130 15:43:21.922045 2245 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 15:43:22.684653 kubelet[2245]: I0130 15:43:22.684577 2245 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 15:43:22.685640 kubelet[2245]: I0130 15:43:22.684933 2245 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:43:22.685640 kubelet[2245]: I0130 15:43:22.685456 2245 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 15:43:23.542437 kubelet[2245]: I0130 15:43:23.542347 2245 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:43:23.559856 kubelet[2245]: E0130 15:43:23.559428 2245 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:23.736604 kubelet[2245]: I0130 15:43:23.734988 2245 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 15:43:23.736604 kubelet[2245]: I0130 15:43:23.735419 2245 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 15:43:23.736604 kubelet[2245]: I0130 15:43:23.735471 2245 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-c-719cef3df4.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 15:43:23.736604 kubelet[2245]: I0130 15:43:23.736216 2245 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 30 15:43:23.737075 kubelet[2245]: I0130 15:43:23.736241 2245 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 15:43:23.737075 kubelet[2245]: I0130 15:43:23.736486 2245 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:43:23.739247 kubelet[2245]: I0130 15:43:23.739217 2245 kubelet.go:400] "Attempting to sync node with API server" Jan 30 15:43:23.739422 kubelet[2245]: I0130 15:43:23.739398 2245 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 15:43:23.739656 kubelet[2245]: I0130 15:43:23.739630 2245 kubelet.go:312] "Adding apiserver pod source" Jan 30 15:43:23.739821 kubelet[2245]: I0130 15:43:23.739798 2245 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 15:43:23.751033 kubelet[2245]: W0130 15:43:23.750456 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c-719cef3df4.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:23.751033 kubelet[2245]: E0130 15:43:23.750728 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c-719cef3df4.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:23.751675 kubelet[2245]: W0130 15:43:23.751589 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:23.751783 kubelet[2245]: E0130 15:43:23.751690 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:23.751862 kubelet[2245]: I0130 15:43:23.751840 2245 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 15:43:23.755608 kubelet[2245]: I0130 15:43:23.755501 2245 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 15:43:23.755728 kubelet[2245]: W0130 15:43:23.755641 2245 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 15:43:23.757384 kubelet[2245]: I0130 15:43:23.757337 2245 server.go:1264] "Started kubelet" Jan 30 15:43:23.773947 kubelet[2245]: E0130 15:43:23.773229 2245 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.139:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.139:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-c-719cef3df4.novalocal.181f82ce4d2277e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-c-719cef3df4.novalocal,UID:ci-4081-3-0-c-719cef3df4.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-c-719cef3df4.novalocal,},FirstTimestamp:2025-01-30 15:43:23.757279208 +0000 UTC m=+1.897945264,LastTimestamp:2025-01-30 15:43:23.757279208 +0000 UTC m=+1.897945264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-c-719cef3df4.novalocal,}" Jan 30 15:43:23.776617 kubelet[2245]: I0130 15:43:23.775206 2245 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 15:43:23.781802 kubelet[2245]: I0130 15:43:23.781733 2245 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 15:43:23.784080 kubelet[2245]: I0130 15:43:23.784047 2245 server.go:455] "Adding debug handlers to kubelet server" Jan 30 15:43:23.786756 kubelet[2245]: I0130 15:43:23.786671 2245 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 15:43:23.787259 kubelet[2245]: I0130 15:43:23.787226 2245 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 15:43:23.787436 kubelet[2245]: I0130 15:43:23.786799 2245 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 15:43:23.787956 kubelet[2245]: I0130 15:43:23.786772 2245 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 15:43:23.788314 kubelet[2245]: I0130 15:43:23.788280 2245 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:43:23.792155 kubelet[2245]: E0130 15:43:23.792085 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c-719cef3df4.novalocal?timeout=10s\": dial tcp 172.24.4.139:6443: connect: connection refused" interval="200ms" Jan 30 15:43:23.793074 kubelet[2245]: E0130 15:43:23.792956 2245 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 15:43:23.794433 kubelet[2245]: W0130 15:43:23.794356 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:23.794728 kubelet[2245]: E0130 15:43:23.794699 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:23.797833 kubelet[2245]: I0130 15:43:23.797794 2245 factory.go:221] Registration of the containerd container factory successfully Jan 30 15:43:23.798006 kubelet[2245]: I0130 15:43:23.797984 2245 factory.go:221] Registration of the systemd container factory successfully Jan 30 15:43:23.798289 kubelet[2245]: I0130 15:43:23.798248 2245 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 15:43:23.815113 kubelet[2245]: I0130 15:43:23.815042 2245 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:43:23.816783 kubelet[2245]: I0130 15:43:23.816749 2245 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 15:43:23.816962 kubelet[2245]: I0130 15:43:23.816939 2245 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 15:43:23.817142 kubelet[2245]: I0130 15:43:23.817087 2245 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 15:43:23.817362 kubelet[2245]: E0130 15:43:23.817323 2245 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 15:43:23.828047 kubelet[2245]: W0130 15:43:23.827987 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:23.828047 kubelet[2245]: E0130 15:43:23.828036 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:23.838112 kubelet[2245]: I0130 15:43:23.838071 2245 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 15:43:23.838112 kubelet[2245]: I0130 15:43:23.838088 2245 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 15:43:23.838112 kubelet[2245]: I0130 15:43:23.838103 2245 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:43:23.844971 kubelet[2245]: I0130 15:43:23.844899 2245 policy_none.go:49] "None policy: Start" Jan 30 15:43:23.849371 kubelet[2245]: I0130 15:43:23.849285 2245 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 15:43:23.849371 kubelet[2245]: I0130 15:43:23.849333 2245 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:43:23.862959 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 30 15:43:23.873409 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 15:43:23.884168 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 15:43:23.885909 kubelet[2245]: I0130 15:43:23.885859 2245 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:43:23.886073 kubelet[2245]: I0130 15:43:23.886015 2245 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:43:23.886156 kubelet[2245]: I0130 15:43:23.886110 2245 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:43:23.889592 kubelet[2245]: I0130 15:43:23.888830 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:23.889592 kubelet[2245]: E0130 15:43:23.889130 2245 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.139:6443/api/v1/nodes\": dial tcp 172.24.4.139:6443: connect: connection refused" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:23.889592 kubelet[2245]: E0130 15:43:23.889445 2245 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-c-719cef3df4.novalocal\" not found" Jan 30 15:43:23.918680 kubelet[2245]: I0130 15:43:23.918461 2245 topology_manager.go:215] "Topology Admit Handler" podUID="6514f789de829c4023b8490e008de990" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:23.921629 kubelet[2245]: I0130 15:43:23.921291 2245 topology_manager.go:215] "Topology Admit Handler" podUID="de1bb02dd0398f20ea00bfd0949021cd" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:23.925261 kubelet[2245]: I0130 15:43:23.925153 2245 topology_manager.go:215] "Topology Admit Handler" podUID="5d9519b08bb9309d017310d41e8c71e5" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:23.940651 systemd[1]: Created slice kubepods-burstable-pod6514f789de829c4023b8490e008de990.slice - libcontainer container kubepods-burstable-pod6514f789de829c4023b8490e008de990.slice. Jan 30 15:43:23.970498 systemd[1]: Created slice kubepods-burstable-pod5d9519b08bb9309d017310d41e8c71e5.slice - libcontainer container kubepods-burstable-pod5d9519b08bb9309d017310d41e8c71e5.slice. Jan 30 15:43:23.987367 systemd[1]: Created slice kubepods-burstable-podde1bb02dd0398f20ea00bfd0949021cd.slice - libcontainer container kubepods-burstable-podde1bb02dd0398f20ea00bfd0949021cd.slice. 
Jan 30 15:43:23.990486 kubelet[2245]: I0130 15:43:23.990409 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6514f789de829c4023b8490e008de990-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"6514f789de829c4023b8490e008de990\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:23.990667 kubelet[2245]: I0130 15:43:23.990503 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6514f789de829c4023b8490e008de990-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"6514f789de829c4023b8490e008de990\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:23.990667 kubelet[2245]: I0130 15:43:23.990598 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6514f789de829c4023b8490e008de990-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"6514f789de829c4023b8490e008de990\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:23.990818 kubelet[2245]: I0130 15:43:23.990665 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de1bb02dd0398f20ea00bfd0949021cd-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"de1bb02dd0398f20ea00bfd0949021cd\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:23.994673 kubelet[2245]: E0130 15:43:23.994585 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c-719cef3df4.novalocal?timeout=10s\": dial tcp 172.24.4.139:6443: connect: connection refused" interval="400ms" Jan 30 15:43:24.092119 kubelet[2245]: I0130 15:43:24.091342 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de1bb02dd0398f20ea00bfd0949021cd-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"de1bb02dd0398f20ea00bfd0949021cd\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:24.092119 kubelet[2245]: I0130 15:43:24.091427 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de1bb02dd0398f20ea00bfd0949021cd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"de1bb02dd0398f20ea00bfd0949021cd\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:24.092119 kubelet[2245]: I0130 15:43:24.091486 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d9519b08bb9309d017310d41e8c71e5-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"5d9519b08bb9309d017310d41e8c71e5\") " pod="kube-system/kube-scheduler-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:24.092119 kubelet[2245]: I0130 15:43:24.091630 2245 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de1bb02dd0398f20ea00bfd0949021cd-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"de1bb02dd0398f20ea00bfd0949021cd\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:24.092463 kubelet[2245]: I0130 15:43:24.091708 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/de1bb02dd0398f20ea00bfd0949021cd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"de1bb02dd0398f20ea00bfd0949021cd\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:24.093571 kubelet[2245]: I0130 15:43:24.092794 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:24.093571 kubelet[2245]: E0130 15:43:24.093300 2245 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.139:6443/api/v1/nodes\": dial tcp 172.24.4.139:6443: connect: connection refused" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:24.260402 containerd[1464]: time="2025-01-30T15:43:24.260284940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal,Uid:6514f789de829c4023b8490e008de990,Namespace:kube-system,Attempt:0,}" Jan 30 15:43:24.286305 containerd[1464]: time="2025-01-30T15:43:24.286175743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-c-719cef3df4.novalocal,Uid:5d9519b08bb9309d017310d41e8c71e5,Namespace:kube-system,Attempt:0,}" Jan 30 15:43:24.296134 containerd[1464]: time="2025-01-30T15:43:24.295575630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal,Uid:de1bb02dd0398f20ea00bfd0949021cd,Namespace:kube-system,Attempt:0,}" Jan 30 15:43:24.396506 kubelet[2245]: E0130 15:43:24.396388 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c-719cef3df4.novalocal?timeout=10s\": dial tcp 172.24.4.139:6443: connect: connection refused" interval="800ms" Jan 30 15:43:24.497182 kubelet[2245]: I0130 15:43:24.497082 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:24.497828 kubelet[2245]: E0130 15:43:24.497744 2245 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.139:6443/api/v1/nodes\": dial tcp 172.24.4.139:6443: connect: connection refused" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:24.806604 kubelet[2245]: W0130 15:43:24.806131 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c-719cef3df4.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:24.806604 kubelet[2245]: E0130 15:43:24.806271 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c-719cef3df4.novalocal&limit=500&resourceVersion=0": dial 
tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:24.887402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1147447147.mount: Deactivated successfully. Jan 30 15:43:24.897497 containerd[1464]: time="2025-01-30T15:43:24.896972395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:43:24.900078 containerd[1464]: time="2025-01-30T15:43:24.899780629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 15:43:24.901763 containerd[1464]: time="2025-01-30T15:43:24.901682803Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:43:24.903745 containerd[1464]: time="2025-01-30T15:43:24.903664996Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:43:24.907363 containerd[1464]: time="2025-01-30T15:43:24.907141948Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 15:43:24.909738 containerd[1464]: time="2025-01-30T15:43:24.909374992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 15:43:24.909738 containerd[1464]: time="2025-01-30T15:43:24.909582662Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:43:24.917163 containerd[1464]: time="2025-01-30T15:43:24.917059306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:43:24.921586 containerd[1464]: time="2025-01-30T15:43:24.921307125Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 660.845714ms" Jan 30 15:43:24.926060 containerd[1464]: time="2025-01-30T15:43:24.925990954Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 639.55384ms" Jan 30 15:43:24.928179 containerd[1464]: time="2025-01-30T15:43:24.928073606Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 632.303581ms" Jan 30 15:43:24.945092 update_engine[1451]: I20250130 15:43:24.944608 1451 update_attempter.cc:509] Updating boot flags... 
Jan 30 15:43:25.010392 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2285) Jan 30 15:43:25.069653 kubelet[2245]: W0130 15:43:25.069250 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:25.069653 kubelet[2245]: E0130 15:43:25.069315 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:25.094647 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2289) Jan 30 15:43:25.103116 kubelet[2245]: W0130 15:43:25.102947 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:25.103116 kubelet[2245]: E0130 15:43:25.103013 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:25.172352 kubelet[2245]: W0130 15:43:25.172288 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:25.172352 kubelet[2245]: E0130 15:43:25.172356 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.139:6443: connect: connection refused Jan 30 15:43:25.183820 containerd[1464]: time="2025-01-30T15:43:25.183479231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:43:25.183820 containerd[1464]: time="2025-01-30T15:43:25.183574640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:43:25.183820 containerd[1464]: time="2025-01-30T15:43:25.183595609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:43:25.184509 containerd[1464]: time="2025-01-30T15:43:25.184260348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:43:25.190223 containerd[1464]: time="2025-01-30T15:43:25.189955655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:43:25.190223 containerd[1464]: time="2025-01-30T15:43:25.190025105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:43:25.190223 containerd[1464]: time="2025-01-30T15:43:25.190053619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:43:25.190223 containerd[1464]: time="2025-01-30T15:43:25.190132827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:43:25.193478 containerd[1464]: time="2025-01-30T15:43:25.192386271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:43:25.193478 containerd[1464]: time="2025-01-30T15:43:25.192443658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:43:25.193478 containerd[1464]: time="2025-01-30T15:43:25.192458065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:43:25.193478 containerd[1464]: time="2025-01-30T15:43:25.192552222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:43:25.197665 kubelet[2245]: E0130 15:43:25.197599 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c-719cef3df4.novalocal?timeout=10s\": dial tcp 172.24.4.139:6443: connect: connection refused" interval="1.6s" Jan 30 15:43:25.217993 systemd[1]: Started cri-containerd-13ccc19c4d40492b47a6bfd12ff43272e86899f8572dfe04f2ac4c582f718c8d.scope - libcontainer container 13ccc19c4d40492b47a6bfd12ff43272e86899f8572dfe04f2ac4c582f718c8d. Jan 30 15:43:25.219104 systemd[1]: Started cri-containerd-e517c61fadc370f6a969bea4ebfe5f2cd23e278e690e89cf0795be6a44fc883a.scope - libcontainer container e517c61fadc370f6a969bea4ebfe5f2cd23e278e690e89cf0795be6a44fc883a. Jan 30 15:43:25.229740 systemd[1]: Started cri-containerd-96d0ad4a68ea801afcae6d6cbf0870cbfb93df93c8937c6a21ff15a6886ba704.scope - libcontainer container 96d0ad4a68ea801afcae6d6cbf0870cbfb93df93c8937c6a21ff15a6886ba704. 
Jan 30 15:43:25.292302 containerd[1464]: time="2025-01-30T15:43:25.292258323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-c-719cef3df4.novalocal,Uid:5d9519b08bb9309d017310d41e8c71e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d0ad4a68ea801afcae6d6cbf0870cbfb93df93c8937c6a21ff15a6886ba704\"" Jan 30 15:43:25.298445 containerd[1464]: time="2025-01-30T15:43:25.298406741Z" level=info msg="CreateContainer within sandbox \"96d0ad4a68ea801afcae6d6cbf0870cbfb93df93c8937c6a21ff15a6886ba704\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 15:43:25.299860 kubelet[2245]: I0130 15:43:25.299842 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:25.300491 kubelet[2245]: E0130 15:43:25.300470 2245 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.139:6443/api/v1/nodes\": dial tcp 172.24.4.139:6443: connect: connection refused" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:25.305020 containerd[1464]: time="2025-01-30T15:43:25.304985658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal,Uid:6514f789de829c4023b8490e008de990,Namespace:kube-system,Attempt:0,} returns sandbox id \"e517c61fadc370f6a969bea4ebfe5f2cd23e278e690e89cf0795be6a44fc883a\"" Jan 30 15:43:25.311310 containerd[1464]: time="2025-01-30T15:43:25.311265964Z" level=info msg="CreateContainer within sandbox \"e517c61fadc370f6a969bea4ebfe5f2cd23e278e690e89cf0795be6a44fc883a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 15:43:25.313765 containerd[1464]: time="2025-01-30T15:43:25.313724181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal,Uid:de1bb02dd0398f20ea00bfd0949021cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"13ccc19c4d40492b47a6bfd12ff43272e86899f8572dfe04f2ac4c582f718c8d\"" Jan 30 15:43:25.318991 containerd[1464]: time="2025-01-30T15:43:25.318928726Z" level=info msg="CreateContainer within sandbox \"13ccc19c4d40492b47a6bfd12ff43272e86899f8572dfe04f2ac4c582f718c8d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 15:43:25.331238 containerd[1464]: time="2025-01-30T15:43:25.330386838Z" level=info msg="CreateContainer within sandbox \"96d0ad4a68ea801afcae6d6cbf0870cbfb93df93c8937c6a21ff15a6886ba704\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36a1c901c36044c9d556780a493eaf4093c78ce28891b2fba24a95dc0cefee7e\"" Jan 30 15:43:25.331238 containerd[1464]: time="2025-01-30T15:43:25.331104997Z" level=info msg="StartContainer for \"36a1c901c36044c9d556780a493eaf4093c78ce28891b2fba24a95dc0cefee7e\"" Jan 30 15:43:25.347836 containerd[1464]: time="2025-01-30T15:43:25.347697160Z" level=info msg="CreateContainer within sandbox \"e517c61fadc370f6a969bea4ebfe5f2cd23e278e690e89cf0795be6a44fc883a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"25970824927baaedfd4b8c2b39f621ddb915d718bc91bdf0dac8ae789494401f\"" Jan 30 15:43:25.348467 containerd[1464]: time="2025-01-30T15:43:25.348395031Z" level=info msg="StartContainer for \"25970824927baaedfd4b8c2b39f621ddb915d718bc91bdf0dac8ae789494401f\"" Jan 30 15:43:25.357234 containerd[1464]: time="2025-01-30T15:43:25.357093760Z" level=info msg="CreateContainer within sandbox \"13ccc19c4d40492b47a6bfd12ff43272e86899f8572dfe04f2ac4c582f718c8d\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c48f118ef77291bce9d1b4d952c27d0b8826aaf9851c0aab881283922eab45d8\"" Jan 30 15:43:25.358571 containerd[1464]: time="2025-01-30T15:43:25.357852284Z" level=info msg="StartContainer for \"c48f118ef77291bce9d1b4d952c27d0b8826aaf9851c0aab881283922eab45d8\"" Jan 30 15:43:25.371997 systemd[1]: Started cri-containerd-36a1c901c36044c9d556780a493eaf4093c78ce28891b2fba24a95dc0cefee7e.scope - libcontainer container 36a1c901c36044c9d556780a493eaf4093c78ce28891b2fba24a95dc0cefee7e. Jan 30 15:43:25.393297 systemd[1]: Started cri-containerd-25970824927baaedfd4b8c2b39f621ddb915d718bc91bdf0dac8ae789494401f.scope - libcontainer container 25970824927baaedfd4b8c2b39f621ddb915d718bc91bdf0dac8ae789494401f. Jan 30 15:43:25.414831 systemd[1]: Started cri-containerd-c48f118ef77291bce9d1b4d952c27d0b8826aaf9851c0aab881283922eab45d8.scope - libcontainer container c48f118ef77291bce9d1b4d952c27d0b8826aaf9851c0aab881283922eab45d8. Jan 30 15:43:25.460604 containerd[1464]: time="2025-01-30T15:43:25.460554484Z" level=info msg="StartContainer for \"36a1c901c36044c9d556780a493eaf4093c78ce28891b2fba24a95dc0cefee7e\" returns successfully" Jan 30 15:43:25.498404 containerd[1464]: time="2025-01-30T15:43:25.498362996Z" level=info msg="StartContainer for \"25970824927baaedfd4b8c2b39f621ddb915d718bc91bdf0dac8ae789494401f\" returns successfully" Jan 30 15:43:25.521120 containerd[1464]: time="2025-01-30T15:43:25.521041702Z" level=info msg="StartContainer for \"c48f118ef77291bce9d1b4d952c27d0b8826aaf9851c0aab881283922eab45d8\" returns successfully" Jan 30 15:43:26.902904 kubelet[2245]: I0130 15:43:26.902453 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:28.058957 kubelet[2245]: E0130 15:43:28.058908 2245 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-c-719cef3df4.novalocal\" not found" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:28.132427 kubelet[2245]: I0130 15:43:28.131826 2245 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:28.753972 kubelet[2245]: I0130 15:43:28.753925 2245 apiserver.go:52] "Watching apiserver" Jan 30 15:43:28.788596 kubelet[2245]: I0130 15:43:28.788509 2245 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 15:43:30.666554 systemd[1]: Reloading requested from client PID 2530 ('systemctl') (unit session-9.scope)... Jan 30 15:43:30.666835 systemd[1]: Reloading... Jan 30 15:43:30.801604 zram_generator::config[2569]: No configuration found. Jan 30 15:43:30.970965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:43:31.072181 systemd[1]: Reloading finished in 404 ms. 
Jan 30 15:43:31.122158 kubelet[2245]: E0130 15:43:31.121490 2245 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081-3-0-c-719cef3df4.novalocal.181f82ce4d2277e8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-c-719cef3df4.novalocal,UID:ci-4081-3-0-c-719cef3df4.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-c-719cef3df4.novalocal,},FirstTimestamp:2025-01-30 15:43:23.757279208 +0000 UTC m=+1.897945264,LastTimestamp:2025-01-30 15:43:23.757279208 +0000 UTC m=+1.897945264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-c-719cef3df4.novalocal,}" Jan 30 15:43:31.121703 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:43:31.129731 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 15:43:31.130031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:43:31.130138 systemd[1]: kubelet.service: Consumed 1.458s CPU time, 115.8M memory peak, 0B memory swap peak. Jan 30 15:43:31.138102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:43:31.345707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:43:31.349520 (kubelet)[2633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 15:43:31.406376 kubelet[2633]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:43:31.406376 kubelet[2633]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 15:43:31.406376 kubelet[2633]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:43:31.406376 kubelet[2633]: I0130 15:43:31.403318 2633 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 15:43:31.417575 kubelet[2633]: I0130 15:43:31.416659 2633 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 15:43:31.417575 kubelet[2633]: I0130 15:43:31.416715 2633 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:43:31.417575 kubelet[2633]: I0130 15:43:31.417207 2633 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 15:43:31.421657 kubelet[2633]: I0130 15:43:31.421616 2633 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 15:43:31.424560 kubelet[2633]: I0130 15:43:31.424385 2633 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:43:31.434158 kubelet[2633]: I0130 15:43:31.434127 2633 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 15:43:31.434801 kubelet[2633]: I0130 15:43:31.434452 2633 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 15:43:31.434801 kubelet[2633]: I0130 15:43:31.434481 2633 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-c-719cef3df4.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 15:43:31.434801 kubelet[2633]: I0130 15:43:31.434681 2633 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 15:43:31.434801 kubelet[2633]: I0130 15:43:31.434693 2633 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 15:43:31.434985 kubelet[2633]: I0130 15:43:31.434727 2633 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:43:31.435058 kubelet[2633]: I0130 15:43:31.435047 2633 kubelet.go:400] "Attempting to sync node with API server" Jan 30 15:43:31.435512 kubelet[2633]: I0130 15:43:31.435500 2633 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 15:43:31.436137 kubelet[2633]: I0130 15:43:31.435609 2633 kubelet.go:312] "Adding apiserver pod source" Jan 30 15:43:31.436137 kubelet[2633]: I0130 15:43:31.435626 2633 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 15:43:31.437402 kubelet[2633]: I0130 15:43:31.437370 2633 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 15:43:31.440439 kubelet[2633]: I0130 15:43:31.440408 2633 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 15:43:31.440887 kubelet[2633]: I0130 15:43:31.440866 2633 server.go:1264] "Started kubelet" Jan 30 15:43:31.445880 kubelet[2633]: I0130 15:43:31.444894 2633 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 15:43:31.446786 kubelet[2633]: I0130 15:43:31.446763 2633 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 15:43:31.446880 kubelet[2633]: I0130 
15:43:31.446868 2633 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 15:43:31.446978 kubelet[2633]: I0130 15:43:31.446960 2633 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 15:43:31.448337 kubelet[2633]: I0130 15:43:31.447922 2633 server.go:455] "Adding debug handlers to kubelet server" Jan 30 15:43:31.452850 kubelet[2633]: I0130 15:43:31.452826 2633 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 15:43:31.454819 kubelet[2633]: I0130 15:43:31.453721 2633 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 15:43:31.454819 kubelet[2633]: I0130 15:43:31.454002 2633 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:43:31.467378 kubelet[2633]: I0130 15:43:31.467348 2633 factory.go:221] Registration of the systemd container factory successfully Jan 30 15:43:31.467497 kubelet[2633]: I0130 15:43:31.467447 2633 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 15:43:31.474517 kubelet[2633]: I0130 15:43:31.474487 2633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:43:31.475413 kubelet[2633]: I0130 15:43:31.475400 2633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 15:43:31.475491 kubelet[2633]: I0130 15:43:31.475482 2633 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 15:43:31.475579 kubelet[2633]: I0130 15:43:31.475569 2633 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 15:43:31.475678 kubelet[2633]: E0130 15:43:31.475661 2633 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 15:43:31.486881 kubelet[2633]: E0130 15:43:31.486720 2633 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 15:43:31.487146 kubelet[2633]: I0130 15:43:31.487111 2633 factory.go:221] Registration of the containerd container factory successfully Jan 30 15:43:31.535819 kubelet[2633]: I0130 15:43:31.535783 2633 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 15:43:31.535819 kubelet[2633]: I0130 15:43:31.535803 2633 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 15:43:31.535819 kubelet[2633]: I0130 15:43:31.535820 2633 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:43:31.536001 kubelet[2633]: I0130 15:43:31.535976 2633 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 15:43:31.536001 kubelet[2633]: I0130 15:43:31.535989 2633 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 15:43:31.536054 kubelet[2633]: I0130 15:43:31.536009 2633 policy_none.go:49] "None policy: Start" Jan 30 15:43:31.536841 kubelet[2633]: I0130 15:43:31.536789 2633 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 15:43:31.536841 kubelet[2633]: I0130 15:43:31.536810 2633 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:43:31.536965 kubelet[2633]: I0130 15:43:31.536949 2633 state_mem.go:75] "Updated machine memory state" Jan 30 15:43:31.541488 kubelet[2633]: I0130 15:43:31.541465 2633 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:43:31.541676 kubelet[2633]: I0130 15:43:31.541633 2633 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:43:31.541816 kubelet[2633]: I0130 15:43:31.541724 2633 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:43:31.564072 kubelet[2633]: I0130 15:43:31.564032 2633 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:31.577722 kubelet[2633]: I0130 15:43:31.576707 2633 topology_manager.go:215] "Topology Admit Handler" podUID="de1bb02dd0398f20ea00bfd0949021cd" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:31.577722 kubelet[2633]: I0130 15:43:31.576821 2633 topology_manager.go:215] "Topology Admit Handler" podUID="5d9519b08bb9309d017310d41e8c71e5" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:31.577722 kubelet[2633]: I0130 15:43:31.576873 2633 topology_manager.go:215] "Topology Admit Handler" podUID="6514f789de829c4023b8490e008de990" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:31.598307 kubelet[2633]: I0130 15:43:31.598065 2633 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:31.598307 kubelet[2633]: I0130 15:43:31.598165 2633 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:31.601619 kubelet[2633]: W0130 15:43:31.601324 2633 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:43:31.603558 kubelet[2633]: W0130 15:43:31.601960 2633 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:43:31.603809 kubelet[2633]: W0130 15:43:31.603795 2633 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:43:31.638681 sudo[2667]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 15:43:31.639001 sudo[2667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 15:43:31.660127 kubelet[2633]: I0130 15:43:31.659857 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6514f789de829c4023b8490e008de990-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"6514f789de829c4023b8490e008de990\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:31.660127 kubelet[2633]: I0130 15:43:31.659895 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6514f789de829c4023b8490e008de990-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"6514f789de829c4023b8490e008de990\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:31.660127 kubelet[2633]: I0130 15:43:31.659924 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de1bb02dd0398f20ea00bfd0949021cd-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"de1bb02dd0398f20ea00bfd0949021cd\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:31.660127 kubelet[2633]: I0130 15:43:31.659966 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/de1bb02dd0398f20ea00bfd0949021cd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"de1bb02dd0398f20ea00bfd0949021cd\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:31.660321 kubelet[2633]: I0130 15:43:31.659991 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de1bb02dd0398f20ea00bfd0949021cd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"de1bb02dd0398f20ea00bfd0949021cd\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:31.660321 kubelet[2633]: I0130 15:43:31.660012 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d9519b08bb9309d017310d41e8c71e5-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"5d9519b08bb9309d017310d41e8c71e5\") " pod="kube-system/kube-scheduler-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:31.660321 kubelet[2633]: I0130 15:43:31.660030 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6514f789de829c4023b8490e008de990-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"6514f789de829c4023b8490e008de990\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 
15:43:31.660321 kubelet[2633]: I0130 15:43:31.660048 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de1bb02dd0398f20ea00bfd0949021cd-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"de1bb02dd0398f20ea00bfd0949021cd\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:31.660423 kubelet[2633]: I0130 15:43:31.660069 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/de1bb02dd0398f20ea00bfd0949021cd-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal\" (UID: \"de1bb02dd0398f20ea00bfd0949021cd\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal" Jan 30 15:43:32.195046 sudo[2667]: pam_unix(sudo:session): session closed for user root Jan 30 15:43:32.437348 kubelet[2633]: I0130 15:43:32.436706 2633 apiserver.go:52] "Watching apiserver" Jan 30 15:43:32.454656 kubelet[2633]: I0130 15:43:32.454339 2633 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 15:43:32.577801 kubelet[2633]: I0130 15:43:32.577430 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-c-719cef3df4.novalocal" podStartSLOduration=1.577384834 podStartE2EDuration="1.577384834s" podCreationTimestamp="2025-01-30 15:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:43:32.577044995 +0000 UTC m=+1.223052761" watchObservedRunningTime="2025-01-30 15:43:32.577384834 +0000 UTC m=+1.223392599" Jan 30 15:43:32.590112 kubelet[2633]: I0130 15:43:32.589975 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-c-719cef3df4.novalocal" podStartSLOduration=1.5899572549999998 podStartE2EDuration="1.589957255s" podCreationTimestamp="2025-01-30 15:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:43:32.589364382 +0000 UTC m=+1.235372127" watchObservedRunningTime="2025-01-30 15:43:32.589957255 +0000 UTC m=+1.235965000" Jan 30 15:43:34.622468 sudo[1694]: pam_unix(sudo:session): session closed for user root Jan 30 15:43:34.901187 sshd[1691]: pam_unix(sshd:session): session closed for user core Jan 30 15:43:34.908984 systemd[1]: sshd@6-172.24.4.139:22-172.24.4.1:33380.service: Deactivated successfully. Jan 30 15:43:34.915883 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 15:43:34.916962 systemd[1]: session-9.scope: Consumed 7.760s CPU time, 193.1M memory peak, 0B memory swap peak. Jan 30 15:43:34.922380 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Jan 30 15:43:34.924816 systemd-logind[1444]: Removed session 9. 
Jan 30 15:43:35.212286 kubelet[2633]: I0130 15:43:35.211985 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-c-719cef3df4.novalocal" podStartSLOduration=4.211946736 podStartE2EDuration="4.211946736s" podCreationTimestamp="2025-01-30 15:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:43:32.604255847 +0000 UTC m=+1.250263582" watchObservedRunningTime="2025-01-30 15:43:35.211946736 +0000 UTC m=+3.857954571" Jan 30 15:43:43.927788 kubelet[2633]: I0130 15:43:43.927700 2633 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 15:43:43.929161 containerd[1464]: time="2025-01-30T15:43:43.928864791Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 15:43:43.929765 kubelet[2633]: I0130 15:43:43.929727 2633 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 15:43:44.174998 kubelet[2633]: I0130 15:43:44.174906 2633 topology_manager.go:215] "Topology Admit Handler" podUID="b0e659cd-83e8-41b1-99bf-dc02492210d6" podNamespace="kube-system" podName="kube-proxy-qjw95" Jan 30 15:43:44.198790 systemd[1]: Created slice kubepods-besteffort-podb0e659cd_83e8_41b1_99bf_dc02492210d6.slice - libcontainer container kubepods-besteffort-podb0e659cd_83e8_41b1_99bf_dc02492210d6.slice. Jan 30 15:43:44.205088 kubelet[2633]: I0130 15:43:44.205029 2633 topology_manager.go:215] "Topology Admit Handler" podUID="db47a1b6-ab18-40b9-aa79-cb244618c46b" podNamespace="kube-system" podName="cilium-4g8bf" Jan 30 15:43:44.221371 systemd[1]: Created slice kubepods-burstable-poddb47a1b6_ab18_40b9_aa79_cb244618c46b.slice - libcontainer container kubepods-burstable-poddb47a1b6_ab18_40b9_aa79_cb244618c46b.slice. 
Jan 30 15:43:44.248338 kubelet[2633]: I0130 15:43:44.248306 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-etc-cni-netd\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.248599 kubelet[2633]: I0130 15:43:44.248526 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-lib-modules\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.249635 kubelet[2633]: I0130 15:43:44.248795 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db47a1b6-ab18-40b9-aa79-cb244618c46b-clustermesh-secrets\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.249635 kubelet[2633]: I0130 15:43:44.248825 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-host-proc-sys-kernel\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.249635 kubelet[2633]: I0130 15:43:44.248856 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0e659cd-83e8-41b1-99bf-dc02492210d6-xtables-lock\") pod \"kube-proxy-qjw95\" (UID: \"b0e659cd-83e8-41b1-99bf-dc02492210d6\") " pod="kube-system/kube-proxy-qjw95" Jan 30 15:43:44.249635 kubelet[2633]: I0130 15:43:44.248877 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db47a1b6-ab18-40b9-aa79-cb244618c46b-cilium-config-path\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.249635 kubelet[2633]: I0130 15:43:44.248895 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-host-proc-sys-net\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.249915 kubelet[2633]: I0130 15:43:44.248912 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db47a1b6-ab18-40b9-aa79-cb244618c46b-hubble-tls\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.249915 kubelet[2633]: I0130 15:43:44.248932 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfpln\" (UniqueName: \"kubernetes.io/projected/db47a1b6-ab18-40b9-aa79-cb244618c46b-kube-api-access-xfpln\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.249915 kubelet[2633]: I0130 15:43:44.248955 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0e659cd-83e8-41b1-99bf-dc02492210d6-lib-modules\") pod \"kube-proxy-qjw95\" (UID: \"b0e659cd-83e8-41b1-99bf-dc02492210d6\") " pod="kube-system/kube-proxy-qjw95" Jan 30 15:43:44.249915 kubelet[2633]: I0130 15:43:44.248976 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-bpf-maps\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.249915 kubelet[2633]: I0130 15:43:44.248993 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-xtables-lock\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.249915 kubelet[2633]: I0130 15:43:44.249009 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-cilium-run\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.250075 kubelet[2633]: I0130 15:43:44.249028 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-hostproc\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.250075 kubelet[2633]: I0130 15:43:44.249048 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hrrw\" (UniqueName: \"kubernetes.io/projected/b0e659cd-83e8-41b1-99bf-dc02492210d6-kube-api-access-5hrrw\") pod \"kube-proxy-qjw95\" (UID: \"b0e659cd-83e8-41b1-99bf-dc02492210d6\") " pod="kube-system/kube-proxy-qjw95" Jan 30 15:43:44.250075 kubelet[2633]: I0130 15:43:44.249066 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-cni-path\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.250075 kubelet[2633]: I0130 15:43:44.249101 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b0e659cd-83e8-41b1-99bf-dc02492210d6-kube-proxy\") pod \"kube-proxy-qjw95\" (UID: \"b0e659cd-83e8-41b1-99bf-dc02492210d6\") " pod="kube-system/kube-proxy-qjw95" Jan 30 15:43:44.250075 kubelet[2633]: I0130 15:43:44.249117 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-cilium-cgroup\") pod \"cilium-4g8bf\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " pod="kube-system/cilium-4g8bf" Jan 30 15:43:44.379749 kubelet[2633]: E0130 15:43:44.376474 2633 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 15:43:44.379749 kubelet[2633]: E0130 15:43:44.376503 2633 projected.go:200] Error preparing data for projected volume kube-api-access-5hrrw for pod 
kube-system/kube-proxy-qjw95: configmap "kube-root-ca.crt" not found Jan 30 15:43:44.379749 kubelet[2633]: E0130 15:43:44.376578 2633 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b0e659cd-83e8-41b1-99bf-dc02492210d6-kube-api-access-5hrrw podName:b0e659cd-83e8-41b1-99bf-dc02492210d6 nodeName:}" failed. No retries permitted until 2025-01-30 15:43:44.876558092 +0000 UTC m=+13.522565837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5hrrw" (UniqueName: "kubernetes.io/projected/b0e659cd-83e8-41b1-99bf-dc02492210d6-kube-api-access-5hrrw") pod "kube-proxy-qjw95" (UID: "b0e659cd-83e8-41b1-99bf-dc02492210d6") : configmap "kube-root-ca.crt" not found Jan 30 15:43:44.408471 kubelet[2633]: E0130 15:43:44.408441 2633 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 15:43:44.408752 kubelet[2633]: E0130 15:43:44.408657 2633 projected.go:200] Error preparing data for projected volume kube-api-access-xfpln for pod kube-system/cilium-4g8bf: configmap "kube-root-ca.crt" not found Jan 30 15:43:44.408752 kubelet[2633]: E0130 15:43:44.408732 2633 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db47a1b6-ab18-40b9-aa79-cb244618c46b-kube-api-access-xfpln podName:db47a1b6-ab18-40b9-aa79-cb244618c46b nodeName:}" failed. No retries permitted until 2025-01-30 15:43:44.908712464 +0000 UTC m=+13.554720209 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xfpln" (UniqueName: "kubernetes.io/projected/db47a1b6-ab18-40b9-aa79-cb244618c46b-kube-api-access-xfpln") pod "cilium-4g8bf" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b") : configmap "kube-root-ca.crt" not found Jan 30 15:43:45.019415 kubelet[2633]: I0130 15:43:45.019307 2633 topology_manager.go:215] "Topology Admit Handler" podUID="cbe21cea-bea3-4424-92fd-86280713caac" podNamespace="kube-system" podName="cilium-operator-599987898-s69qk" Jan 30 15:43:45.036741 systemd[1]: Created slice kubepods-besteffort-podcbe21cea_bea3_4424_92fd_86280713caac.slice - libcontainer container kubepods-besteffort-podcbe21cea_bea3_4424_92fd_86280713caac.slice. 
Jan 30 15:43:45.055031 kubelet[2633]: I0130 15:43:45.054985 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5wnp\" (UniqueName: \"kubernetes.io/projected/cbe21cea-bea3-4424-92fd-86280713caac-kube-api-access-k5wnp\") pod \"cilium-operator-599987898-s69qk\" (UID: \"cbe21cea-bea3-4424-92fd-86280713caac\") " pod="kube-system/cilium-operator-599987898-s69qk" Jan 30 15:43:45.055155 kubelet[2633]: I0130 15:43:45.055073 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cbe21cea-bea3-4424-92fd-86280713caac-cilium-config-path\") pod \"cilium-operator-599987898-s69qk\" (UID: \"cbe21cea-bea3-4424-92fd-86280713caac\") " pod="kube-system/cilium-operator-599987898-s69qk" Jan 30 15:43:45.111515 containerd[1464]: time="2025-01-30T15:43:45.110949527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjw95,Uid:b0e659cd-83e8-41b1-99bf-dc02492210d6,Namespace:kube-system,Attempt:0,}" Jan 30 15:43:45.127624 containerd[1464]: time="2025-01-30T15:43:45.127418959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4g8bf,Uid:db47a1b6-ab18-40b9-aa79-cb244618c46b,Namespace:kube-system,Attempt:0,}" Jan 30 15:43:45.198248 containerd[1464]: time="2025-01-30T15:43:45.198109742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:43:45.198753 containerd[1464]: time="2025-01-30T15:43:45.198647901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:43:45.199064 containerd[1464]: time="2025-01-30T15:43:45.198970908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:43:45.199689 containerd[1464]: time="2025-01-30T15:43:45.199505461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:43:45.209399 containerd[1464]: time="2025-01-30T15:43:45.208841558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:43:45.209399 containerd[1464]: time="2025-01-30T15:43:45.208969227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:43:45.209399 containerd[1464]: time="2025-01-30T15:43:45.209003722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:43:45.209399 containerd[1464]: time="2025-01-30T15:43:45.209205220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:43:45.230209 systemd[1]: Started cri-containerd-800ed2f564254c48248f3ebcf8a84d9344af4a033e89076757460a3909574d8c.scope - libcontainer container 800ed2f564254c48248f3ebcf8a84d9344af4a033e89076757460a3909574d8c. Jan 30 15:43:45.240710 systemd[1]: Started cri-containerd-f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281.scope - libcontainer container f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281. 
Jan 30 15:43:45.267821 containerd[1464]: time="2025-01-30T15:43:45.267772768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjw95,Uid:b0e659cd-83e8-41b1-99bf-dc02492210d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"800ed2f564254c48248f3ebcf8a84d9344af4a033e89076757460a3909574d8c\"" Jan 30 15:43:45.274372 containerd[1464]: time="2025-01-30T15:43:45.274270819Z" level=info msg="CreateContainer within sandbox \"800ed2f564254c48248f3ebcf8a84d9344af4a033e89076757460a3909574d8c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 15:43:45.280365 containerd[1464]: time="2025-01-30T15:43:45.280252893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4g8bf,Uid:db47a1b6-ab18-40b9-aa79-cb244618c46b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\"" Jan 30 15:43:45.283980 containerd[1464]: time="2025-01-30T15:43:45.283780452Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 15:43:45.315448 containerd[1464]: time="2025-01-30T15:43:45.315361516Z" level=info msg="CreateContainer within sandbox \"800ed2f564254c48248f3ebcf8a84d9344af4a033e89076757460a3909574d8c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5d7ae20d34767dba4dac7b1bb2c5b7c1edbff39e00c49d0935e5eddb99bcedf0\"" Jan 30 15:43:45.316338 containerd[1464]: time="2025-01-30T15:43:45.316018759Z" level=info msg="StartContainer for \"5d7ae20d34767dba4dac7b1bb2c5b7c1edbff39e00c49d0935e5eddb99bcedf0\"" Jan 30 15:43:45.344315 containerd[1464]: time="2025-01-30T15:43:45.343896896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s69qk,Uid:cbe21cea-bea3-4424-92fd-86280713caac,Namespace:kube-system,Attempt:0,}" Jan 30 15:43:45.351770 systemd[1]: Started cri-containerd-5d7ae20d34767dba4dac7b1bb2c5b7c1edbff39e00c49d0935e5eddb99bcedf0.scope - libcontainer container 5d7ae20d34767dba4dac7b1bb2c5b7c1edbff39e00c49d0935e5eddb99bcedf0. Jan 30 15:43:45.398517 containerd[1464]: time="2025-01-30T15:43:45.398231491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:43:45.398517 containerd[1464]: time="2025-01-30T15:43:45.398298146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:43:45.398517 containerd[1464]: time="2025-01-30T15:43:45.398317132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:43:45.400598 containerd[1464]: time="2025-01-30T15:43:45.399229744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:43:45.419331 containerd[1464]: time="2025-01-30T15:43:45.419292156Z" level=info msg="StartContainer for \"5d7ae20d34767dba4dac7b1bb2c5b7c1edbff39e00c49d0935e5eddb99bcedf0\" returns successfully" Jan 30 15:43:45.437904 systemd[1]: Started cri-containerd-0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430.scope - libcontainer container 0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430. 
Jan 30 15:43:45.496475 containerd[1464]: time="2025-01-30T15:43:45.496360386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s69qk,Uid:cbe21cea-bea3-4424-92fd-86280713caac,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\"" Jan 30 15:43:45.571163 kubelet[2633]: I0130 15:43:45.570973 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qjw95" podStartSLOduration=1.5709550540000001 podStartE2EDuration="1.570955054s" podCreationTimestamp="2025-01-30 15:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:43:45.57058497 +0000 UTC m=+14.216592755" watchObservedRunningTime="2025-01-30 15:43:45.570955054 +0000 UTC m=+14.216962799" Jan 30 15:43:53.096015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628484483.mount: Deactivated successfully. Jan 30 15:43:55.501423 containerd[1464]: time="2025-01-30T15:43:55.501357472Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:55.503195 containerd[1464]: time="2025-01-30T15:43:55.502993281Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 15:43:55.504431 containerd[1464]: time="2025-01-30T15:43:55.504370605Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:55.506298 containerd[1464]: time="2025-01-30T15:43:55.506191590Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.222370201s" Jan 30 15:43:55.506298 containerd[1464]: time="2025-01-30T15:43:55.506226736Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 15:43:55.508653 containerd[1464]: time="2025-01-30T15:43:55.508434167Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 15:43:55.510065 containerd[1464]: time="2025-01-30T15:43:55.510035060Z" level=info msg="CreateContainer within sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 15:43:55.529402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1232693620.mount: Deactivated successfully. 
Jan 30 15:43:55.537232 containerd[1464]: time="2025-01-30T15:43:55.536896670Z" level=info msg="CreateContainer within sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57\"" Jan 30 15:43:55.538050 containerd[1464]: time="2025-01-30T15:43:55.537674339Z" level=info msg="StartContainer for \"350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57\"" Jan 30 15:43:55.579695 systemd[1]: Started cri-containerd-350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57.scope - libcontainer container 350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57. Jan 30 15:43:55.615173 containerd[1464]: time="2025-01-30T15:43:55.615071869Z" level=info msg="StartContainer for \"350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57\" returns successfully" Jan 30 15:43:55.629696 systemd[1]: cri-containerd-350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57.scope: Deactivated successfully. Jan 30 15:43:56.528768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57-rootfs.mount: Deactivated successfully. Jan 30 15:43:57.077242 containerd[1464]: time="2025-01-30T15:43:57.076184503Z" level=info msg="shim disconnected" id=350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57 namespace=k8s.io Jan 30 15:43:57.077242 containerd[1464]: time="2025-01-30T15:43:57.076289911Z" level=warning msg="cleaning up after shim disconnected" id=350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57 namespace=k8s.io Jan 30 15:43:57.077242 containerd[1464]: time="2025-01-30T15:43:57.076313445Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:43:57.649577 containerd[1464]: time="2025-01-30T15:43:57.646802845Z" level=info msg="CreateContainer within sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 15:43:57.684661 containerd[1464]: time="2025-01-30T15:43:57.684522351Z" level=info msg="CreateContainer within sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073\"" Jan 30 15:43:57.687714 containerd[1464]: time="2025-01-30T15:43:57.686186924Z" level=info msg="StartContainer for \"e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073\"" Jan 30 15:43:57.743922 systemd[1]: Started cri-containerd-e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073.scope - libcontainer container e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073. Jan 30 15:43:57.777875 containerd[1464]: time="2025-01-30T15:43:57.777771032Z" level=info msg="StartContainer for \"e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073\" returns successfully" Jan 30 15:43:57.788441 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 15:43:57.789070 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:43:57.789165 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:43:57.798003 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 30 15:43:57.798466 systemd[1]: cri-containerd-e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073.scope: Deactivated successfully. Jan 30 15:43:57.820622 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:43:57.831334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073-rootfs.mount: Deactivated successfully. Jan 30 15:43:57.836100 containerd[1464]: time="2025-01-30T15:43:57.836032361Z" level=info msg="shim disconnected" id=e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073 namespace=k8s.io Jan 30 15:43:57.836399 containerd[1464]: time="2025-01-30T15:43:57.836242535Z" level=warning msg="cleaning up after shim disconnected" id=e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073 namespace=k8s.io Jan 30 15:43:57.836399 containerd[1464]: time="2025-01-30T15:43:57.836261180Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:43:58.659327 containerd[1464]: time="2025-01-30T15:43:58.659259624Z" level=info msg="CreateContainer within sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 15:43:58.721656 containerd[1464]: time="2025-01-30T15:43:58.719474417Z" level=info msg="CreateContainer within sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7\"" Jan 30 15:43:58.721656 containerd[1464]: time="2025-01-30T15:43:58.720863071Z" level=info msg="StartContainer for \"de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7\"" Jan 30 15:43:58.746202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4152238774.mount: Deactivated successfully. Jan 30 15:43:58.772821 systemd[1]: Started cri-containerd-de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7.scope - libcontainer container de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7. Jan 30 15:43:58.823849 containerd[1464]: time="2025-01-30T15:43:58.823729762Z" level=info msg="StartContainer for \"de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7\" returns successfully" Jan 30 15:43:58.827231 systemd[1]: cri-containerd-de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7.scope: Deactivated successfully. 
Jan 30 15:43:58.888414 containerd[1464]: time="2025-01-30T15:43:58.888216818Z" level=info msg="shim disconnected" id=de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7 namespace=k8s.io Jan 30 15:43:58.888414 containerd[1464]: time="2025-01-30T15:43:58.888299143Z" level=warning msg="cleaning up after shim disconnected" id=de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7 namespace=k8s.io Jan 30 15:43:58.888414 containerd[1464]: time="2025-01-30T15:43:58.888309652Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:43:59.397462 containerd[1464]: time="2025-01-30T15:43:59.397404356Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:59.398792 containerd[1464]: time="2025-01-30T15:43:59.398638051Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 15:43:59.400333 containerd[1464]: time="2025-01-30T15:43:59.400272596Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:43:59.402140 containerd[1464]: time="2025-01-30T15:43:59.401994467Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.893526266s" Jan 30 15:43:59.402140 containerd[1464]: time="2025-01-30T15:43:59.402037197Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 15:43:59.404697 containerd[1464]: time="2025-01-30T15:43:59.404516869Z" level=info msg="CreateContainer within sandbox \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 15:43:59.428030 containerd[1464]: time="2025-01-30T15:43:59.427984800Z" level=info msg="CreateContainer within sandbox \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\"" Jan 30 15:43:59.428727 containerd[1464]: time="2025-01-30T15:43:59.428566741Z" level=info msg="StartContainer for \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\"" Jan 30 15:43:59.454690 systemd[1]: Started cri-containerd-c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846.scope - libcontainer container c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846. 
Jan 30 15:43:59.485461 containerd[1464]: time="2025-01-30T15:43:59.485425146Z" level=info msg="StartContainer for \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\" returns successfully" Jan 30 15:43:59.657652 containerd[1464]: time="2025-01-30T15:43:59.657311291Z" level=info msg="CreateContainer within sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 15:43:59.680527 containerd[1464]: time="2025-01-30T15:43:59.680475352Z" level=info msg="CreateContainer within sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf\"" Jan 30 15:43:59.681872 containerd[1464]: time="2025-01-30T15:43:59.680953018Z" level=info msg="StartContainer for \"407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf\"" Jan 30 15:43:59.688005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7-rootfs.mount: Deactivated successfully. Jan 30 15:43:59.740723 systemd[1]: Started cri-containerd-407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf.scope - libcontainer container 407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf. Jan 30 15:43:59.819264 systemd[1]: cri-containerd-407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf.scope: Deactivated successfully. Jan 30 15:43:59.821887 containerd[1464]: time="2025-01-30T15:43:59.821180187Z" level=info msg="StartContainer for \"407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf\" returns successfully" Jan 30 15:43:59.860470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf-rootfs.mount: Deactivated successfully. 
Jan 30 15:44:00.284299 containerd[1464]: time="2025-01-30T15:44:00.284204715Z" level=info msg="shim disconnected" id=407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf namespace=k8s.io Jan 30 15:44:00.284299 containerd[1464]: time="2025-01-30T15:44:00.284255080Z" level=warning msg="cleaning up after shim disconnected" id=407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf namespace=k8s.io Jan 30 15:44:00.284299 containerd[1464]: time="2025-01-30T15:44:00.284265710Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:44:00.664933 containerd[1464]: time="2025-01-30T15:44:00.664808877Z" level=info msg="CreateContainer within sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 15:44:00.686722 kubelet[2633]: I0130 15:44:00.686644 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-s69qk" podStartSLOduration=2.782174298 podStartE2EDuration="16.686623927s" podCreationTimestamp="2025-01-30 15:43:44 +0000 UTC" firstStartedPulling="2025-01-30 15:43:45.498357492 +0000 UTC m=+14.144365237" lastFinishedPulling="2025-01-30 15:43:59.402807131 +0000 UTC m=+28.048814866" observedRunningTime="2025-01-30 15:43:59.824404435 +0000 UTC m=+28.470412200" watchObservedRunningTime="2025-01-30 15:44:00.686623927 +0000 UTC m=+29.332631662" Jan 30 15:44:00.696743 containerd[1464]: time="2025-01-30T15:44:00.696699387Z" level=info msg="CreateContainer within sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\"" Jan 30 15:44:00.698583 containerd[1464]: time="2025-01-30T15:44:00.697586971Z" level=info msg="StartContainer for \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\"" Jan 30 15:44:00.733726 systemd[1]: Started cri-containerd-6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea.scope - libcontainer container 6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea. Jan 30 15:44:00.775592 containerd[1464]: time="2025-01-30T15:44:00.775463802Z" level=info msg="StartContainer for \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\" returns successfully" Jan 30 15:44:00.949786 kubelet[2633]: I0130 15:44:00.949422 2633 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 15:44:00.990564 kubelet[2633]: I0130 15:44:00.990289 2633 topology_manager.go:215] "Topology Admit Handler" podUID="88faa086-44fe-4dd0-ad80-cbce35cb149e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jdhd7" Jan 30 15:44:00.993453 kubelet[2633]: I0130 15:44:00.992556 2633 topology_manager.go:215] "Topology Admit Handler" podUID="e4e7503a-7b11-4890-b14f-dd2db69975ec" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5w9p6" Jan 30 15:44:01.002211 systemd[1]: Created slice kubepods-burstable-pod88faa086_44fe_4dd0_ad80_cbce35cb149e.slice - libcontainer container kubepods-burstable-pod88faa086_44fe_4dd0_ad80_cbce35cb149e.slice. Jan 30 15:44:01.010839 systemd[1]: Created slice kubepods-burstable-pode4e7503a_7b11_4890_b14f_dd2db69975ec.slice - libcontainer container kubepods-burstable-pode4e7503a_7b11_4890_b14f_dd2db69975ec.slice. 
Jan 30 15:44:01.061837 kubelet[2633]: I0130 15:44:01.061785 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4e7503a-7b11-4890-b14f-dd2db69975ec-config-volume\") pod \"coredns-7db6d8ff4d-5w9p6\" (UID: \"e4e7503a-7b11-4890-b14f-dd2db69975ec\") " pod="kube-system/coredns-7db6d8ff4d-5w9p6" Jan 30 15:44:01.061837 kubelet[2633]: I0130 15:44:01.061835 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbxl9\" (UniqueName: \"kubernetes.io/projected/e4e7503a-7b11-4890-b14f-dd2db69975ec-kube-api-access-fbxl9\") pod \"coredns-7db6d8ff4d-5w9p6\" (UID: \"e4e7503a-7b11-4890-b14f-dd2db69975ec\") " pod="kube-system/coredns-7db6d8ff4d-5w9p6" Jan 30 15:44:01.062005 kubelet[2633]: I0130 15:44:01.061861 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnlnk\" (UniqueName: \"kubernetes.io/projected/88faa086-44fe-4dd0-ad80-cbce35cb149e-kube-api-access-xnlnk\") pod \"coredns-7db6d8ff4d-jdhd7\" (UID: \"88faa086-44fe-4dd0-ad80-cbce35cb149e\") " pod="kube-system/coredns-7db6d8ff4d-jdhd7" Jan 30 15:44:01.062005 kubelet[2633]: I0130 15:44:01.061885 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88faa086-44fe-4dd0-ad80-cbce35cb149e-config-volume\") pod \"coredns-7db6d8ff4d-jdhd7\" (UID: \"88faa086-44fe-4dd0-ad80-cbce35cb149e\") " pod="kube-system/coredns-7db6d8ff4d-jdhd7" Jan 30 15:44:01.311162 containerd[1464]: time="2025-01-30T15:44:01.310692601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jdhd7,Uid:88faa086-44fe-4dd0-ad80-cbce35cb149e,Namespace:kube-system,Attempt:0,}" Jan 30 15:44:01.316455 containerd[1464]: time="2025-01-30T15:44:01.316238955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5w9p6,Uid:e4e7503a-7b11-4890-b14f-dd2db69975ec,Namespace:kube-system,Attempt:0,}" Jan 30 15:44:01.703283 kubelet[2633]: I0130 15:44:01.703216 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4g8bf" podStartSLOduration=7.478948042 podStartE2EDuration="17.703182932s" podCreationTimestamp="2025-01-30 15:43:44 +0000 UTC" firstStartedPulling="2025-01-30 15:43:45.283088974 +0000 UTC m=+13.929096720" lastFinishedPulling="2025-01-30 15:43:55.507323865 +0000 UTC m=+24.153331610" observedRunningTime="2025-01-30 15:44:01.69703534 +0000 UTC m=+30.343043075" watchObservedRunningTime="2025-01-30 15:44:01.703182932 +0000 UTC m=+30.349190667" Jan 30 15:44:03.023182 systemd-networkd[1374]: cilium_host: Link UP Jan 30 15:44:03.025946 systemd-networkd[1374]: cilium_net: Link UP Jan 30 15:44:03.025954 systemd-networkd[1374]: cilium_net: Gained carrier Jan 30 15:44:03.026502 systemd-networkd[1374]: cilium_host: Gained carrier Jan 30 15:44:03.129524 systemd-networkd[1374]: cilium_vxlan: Link UP Jan 30 15:44:03.129720 systemd-networkd[1374]: cilium_vxlan: Gained carrier Jan 30 15:44:03.165660 systemd-networkd[1374]: cilium_net: Gained IPv6LL Jan 30 15:44:03.393839 kernel: NET: Registered PF_ALG protocol family Jan 30 15:44:03.933680 systemd-networkd[1374]: cilium_host: Gained IPv6LL Jan 30 15:44:04.119150 systemd-networkd[1374]: lxc_health: Link UP Jan 30 15:44:04.131189 systemd-networkd[1374]: lxc_health: Gained carrier Jan 30 15:44:04.396812 systemd-networkd[1374]: lxc5cd1b0cd0b9f: Link 
UP Jan 30 15:44:04.402711 kernel: eth0: renamed from tmp1acce Jan 30 15:44:04.416256 systemd-networkd[1374]: lxc5cd1b0cd0b9f: Gained carrier Jan 30 15:44:04.433713 systemd-networkd[1374]: lxcfabd0b354142: Link UP Jan 30 15:44:04.442730 kernel: eth0: renamed from tmp10af5 Jan 30 15:44:04.448680 systemd-networkd[1374]: lxcfabd0b354142: Gained carrier Jan 30 15:44:04.509690 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL Jan 30 15:44:05.469727 systemd-networkd[1374]: lxc5cd1b0cd0b9f: Gained IPv6LL Jan 30 15:44:05.725735 systemd-networkd[1374]: lxcfabd0b354142: Gained IPv6LL Jan 30 15:44:05.917834 systemd-networkd[1374]: lxc_health: Gained IPv6LL Jan 30 15:44:08.976755 containerd[1464]: time="2025-01-30T15:44:08.975897402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:44:08.977210 containerd[1464]: time="2025-01-30T15:44:08.976790498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:44:08.977210 containerd[1464]: time="2025-01-30T15:44:08.976822679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:44:08.977210 containerd[1464]: time="2025-01-30T15:44:08.976915813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:44:09.016398 systemd[1]: Started cri-containerd-1acce034d9776c0d736da6c9e946e37c179e910b4a7767b01be44586267b015c.scope - libcontainer container 1acce034d9776c0d736da6c9e946e37c179e910b4a7767b01be44586267b015c. Jan 30 15:44:09.043566 containerd[1464]: time="2025-01-30T15:44:09.043255604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:44:09.043566 containerd[1464]: time="2025-01-30T15:44:09.043327620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:44:09.043566 containerd[1464]: time="2025-01-30T15:44:09.043342087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:44:09.043566 containerd[1464]: time="2025-01-30T15:44:09.043416947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:44:09.079674 systemd[1]: Started cri-containerd-10af521ad50887da254ac82714a7a1a227e11d146bdaa72d3815fd9af4c59001.scope - libcontainer container 10af521ad50887da254ac82714a7a1a227e11d146bdaa72d3815fd9af4c59001. 
Jan 30 15:44:09.110287 containerd[1464]: time="2025-01-30T15:44:09.109695162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5w9p6,Uid:e4e7503a-7b11-4890-b14f-dd2db69975ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"1acce034d9776c0d736da6c9e946e37c179e910b4a7767b01be44586267b015c\"" Jan 30 15:44:09.117111 containerd[1464]: time="2025-01-30T15:44:09.116930272Z" level=info msg="CreateContainer within sandbox \"1acce034d9776c0d736da6c9e946e37c179e910b4a7767b01be44586267b015c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 15:44:09.146343 containerd[1464]: time="2025-01-30T15:44:09.146298047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jdhd7,Uid:88faa086-44fe-4dd0-ad80-cbce35cb149e,Namespace:kube-system,Attempt:0,} returns sandbox id \"10af521ad50887da254ac82714a7a1a227e11d146bdaa72d3815fd9af4c59001\"" Jan 30 15:44:09.152573 containerd[1464]: time="2025-01-30T15:44:09.151880608Z" level=info msg="CreateContainer within sandbox \"10af521ad50887da254ac82714a7a1a227e11d146bdaa72d3815fd9af4c59001\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 15:44:09.158362 containerd[1464]: time="2025-01-30T15:44:09.158313473Z" level=info msg="CreateContainer within sandbox \"1acce034d9776c0d736da6c9e946e37c179e910b4a7767b01be44586267b015c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6680c899e2688730705816ab6884f7fcf7e0fd025f1668d3011a1539f362ade1\"" Jan 30 15:44:09.159025 containerd[1464]: time="2025-01-30T15:44:09.158991285Z" level=info msg="StartContainer for \"6680c899e2688730705816ab6884f7fcf7e0fd025f1668d3011a1539f362ade1\"" Jan 30 15:44:09.178796 containerd[1464]: time="2025-01-30T15:44:09.178740257Z" level=info msg="CreateContainer within sandbox \"10af521ad50887da254ac82714a7a1a227e11d146bdaa72d3815fd9af4c59001\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eef89744f515b12b5d3e1799edac06ef9b2b55e435a5cddbb53e7ba85b19afd5\"" Jan 30 15:44:09.180431 containerd[1464]: time="2025-01-30T15:44:09.180392587Z" level=info msg="StartContainer for \"eef89744f515b12b5d3e1799edac06ef9b2b55e435a5cddbb53e7ba85b19afd5\"" Jan 30 15:44:09.203739 systemd[1]: Started cri-containerd-6680c899e2688730705816ab6884f7fcf7e0fd025f1668d3011a1539f362ade1.scope - libcontainer container 6680c899e2688730705816ab6884f7fcf7e0fd025f1668d3011a1539f362ade1. Jan 30 15:44:09.233865 systemd[1]: Started cri-containerd-eef89744f515b12b5d3e1799edac06ef9b2b55e435a5cddbb53e7ba85b19afd5.scope - libcontainer container eef89744f515b12b5d3e1799edac06ef9b2b55e435a5cddbb53e7ba85b19afd5. 
Jan 30 15:44:09.264378 containerd[1464]: time="2025-01-30T15:44:09.263860514Z" level=info msg="StartContainer for \"6680c899e2688730705816ab6884f7fcf7e0fd025f1668d3011a1539f362ade1\" returns successfully" Jan 30 15:44:09.291932 containerd[1464]: time="2025-01-30T15:44:09.291857636Z" level=info msg="StartContainer for \"eef89744f515b12b5d3e1799edac06ef9b2b55e435a5cddbb53e7ba85b19afd5\" returns successfully" Jan 30 15:44:09.719612 kubelet[2633]: I0130 15:44:09.719446 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5w9p6" podStartSLOduration=24.719412494 podStartE2EDuration="24.719412494s" podCreationTimestamp="2025-01-30 15:43:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:44:09.716501665 +0000 UTC m=+38.362509460" watchObservedRunningTime="2025-01-30 15:44:09.719412494 +0000 UTC m=+38.365420279" Jan 30 15:44:11.024466 kubelet[2633]: I0130 15:44:11.024107 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 15:44:11.070579 kubelet[2633]: I0130 15:44:11.069894 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jdhd7" podStartSLOduration=27.06986318 podStartE2EDuration="27.06986318s" podCreationTimestamp="2025-01-30 15:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:44:09.785383173 +0000 UTC m=+38.431390908" watchObservedRunningTime="2025-01-30 15:44:11.06986318 +0000 UTC m=+39.715870965" Jan 30 15:45:14.621688 systemd[1]: Started sshd@7-172.24.4.139:22-172.24.4.1:41016.service - OpenSSH per-connection server daemon (172.24.4.1:41016). Jan 30 15:45:16.084497 sshd[4005]: Accepted publickey for core from 172.24.4.1 port 41016 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:16.088085 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:16.100909 systemd-logind[1444]: New session 10 of user core. Jan 30 15:45:16.114892 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 15:45:16.961058 sshd[4005]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:16.970214 systemd[1]: sshd@7-172.24.4.139:22-172.24.4.1:41016.service: Deactivated successfully. Jan 30 15:45:16.975494 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 15:45:16.977522 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Jan 30 15:45:16.980239 systemd-logind[1444]: Removed session 10. Jan 30 15:45:21.979391 systemd[1]: Started sshd@8-172.24.4.139:22-172.24.4.1:41028.service - OpenSSH per-connection server daemon (172.24.4.1:41028). Jan 30 15:45:23.421619 sshd[4021]: Accepted publickey for core from 172.24.4.1 port 41028 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:23.425291 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:23.437030 systemd-logind[1444]: New session 11 of user core. Jan 30 15:45:23.444875 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 15:45:24.138777 sshd[4021]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:24.146864 systemd[1]: sshd@8-172.24.4.139:22-172.24.4.1:41028.service: Deactivated successfully. Jan 30 15:45:24.150688 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 30 15:45:24.152714 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Jan 30 15:45:24.155253 systemd-logind[1444]: Removed session 11. Jan 30 15:45:29.160267 systemd[1]: Started sshd@9-172.24.4.139:22-172.24.4.1:34402.service - OpenSSH per-connection server daemon (172.24.4.1:34402). Jan 30 15:45:30.411162 sshd[4034]: Accepted publickey for core from 172.24.4.1 port 34402 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:30.415727 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:30.431648 systemd-logind[1444]: New session 12 of user core. Jan 30 15:45:30.436864 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 15:45:31.198081 sshd[4034]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:31.204163 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Jan 30 15:45:31.205359 systemd[1]: sshd@9-172.24.4.139:22-172.24.4.1:34402.service: Deactivated successfully. Jan 30 15:45:31.208512 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 15:45:31.211091 systemd-logind[1444]: Removed session 12. Jan 30 15:45:36.219127 systemd[1]: Started sshd@10-172.24.4.139:22-172.24.4.1:40220.service - OpenSSH per-connection server daemon (172.24.4.1:40220). Jan 30 15:45:37.512001 sshd[4050]: Accepted publickey for core from 172.24.4.1 port 40220 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:37.515947 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:37.529787 systemd-logind[1444]: New session 13 of user core. Jan 30 15:45:37.541996 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 15:45:38.317021 sshd[4050]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:38.334988 systemd[1]: sshd@10-172.24.4.139:22-172.24.4.1:40220.service: Deactivated successfully. Jan 30 15:45:38.342216 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 15:45:38.346854 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Jan 30 15:45:38.356909 systemd[1]: Started sshd@11-172.24.4.139:22-172.24.4.1:40236.service - OpenSSH per-connection server daemon (172.24.4.1:40236). Jan 30 15:45:38.360243 systemd-logind[1444]: Removed session 13. Jan 30 15:45:39.632092 sshd[4063]: Accepted publickey for core from 172.24.4.1 port 40236 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:39.633603 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:39.643926 systemd-logind[1444]: New session 14 of user core. Jan 30 15:45:39.651963 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 15:45:40.494237 sshd[4063]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:40.511623 systemd[1]: sshd@11-172.24.4.139:22-172.24.4.1:40236.service: Deactivated successfully. Jan 30 15:45:40.516865 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 15:45:40.521404 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Jan 30 15:45:40.528147 systemd[1]: Started sshd@12-172.24.4.139:22-172.24.4.1:40238.service - OpenSSH per-connection server daemon (172.24.4.1:40238). Jan 30 15:45:40.530438 systemd-logind[1444]: Removed session 14. 
Jan 30 15:45:41.957736 sshd[4074]: Accepted publickey for core from 172.24.4.1 port 40238 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:41.960515 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:41.971980 systemd-logind[1444]: New session 15 of user core. Jan 30 15:45:41.977865 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 15:45:42.689800 sshd[4074]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:42.695871 systemd[1]: sshd@12-172.24.4.139:22-172.24.4.1:40238.service: Deactivated successfully. Jan 30 15:45:42.702853 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 15:45:42.706890 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Jan 30 15:45:42.709001 systemd-logind[1444]: Removed session 15. Jan 30 15:45:47.710981 systemd[1]: Started sshd@13-172.24.4.139:22-172.24.4.1:56796.service - OpenSSH per-connection server daemon (172.24.4.1:56796). Jan 30 15:45:49.377601 sshd[4089]: Accepted publickey for core from 172.24.4.1 port 56796 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:49.380308 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:49.390960 systemd-logind[1444]: New session 16 of user core. Jan 30 15:45:49.400864 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 15:45:50.131902 sshd[4089]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:50.141822 systemd[1]: sshd@13-172.24.4.139:22-172.24.4.1:56796.service: Deactivated successfully. Jan 30 15:45:50.147086 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 15:45:50.149163 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Jan 30 15:45:50.158416 systemd[1]: Started sshd@14-172.24.4.139:22-172.24.4.1:56812.service - OpenSSH per-connection server daemon (172.24.4.1:56812). Jan 30 15:45:50.164877 systemd-logind[1444]: Removed session 16. Jan 30 15:45:51.350297 sshd[4101]: Accepted publickey for core from 172.24.4.1 port 56812 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:51.353608 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:51.366404 systemd-logind[1444]: New session 17 of user core. Jan 30 15:45:51.386042 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 15:45:52.119931 sshd[4101]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:52.134697 systemd[1]: sshd@14-172.24.4.139:22-172.24.4.1:56812.service: Deactivated successfully. Jan 30 15:45:52.140664 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 15:45:52.143832 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Jan 30 15:45:52.153289 systemd[1]: Started sshd@15-172.24.4.139:22-172.24.4.1:56826.service - OpenSSH per-connection server daemon (172.24.4.1:56826). Jan 30 15:45:52.157370 systemd-logind[1444]: Removed session 17. Jan 30 15:45:53.516447 sshd[4112]: Accepted publickey for core from 172.24.4.1 port 56826 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:53.519394 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:53.533753 systemd-logind[1444]: New session 18 of user core. Jan 30 15:45:53.540856 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 30 15:45:56.310265 sshd[4112]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:56.326336 systemd[1]: sshd@15-172.24.4.139:22-172.24.4.1:56826.service: Deactivated successfully. Jan 30 15:45:56.333745 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 15:45:56.337095 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Jan 30 15:45:56.348224 systemd[1]: Started sshd@16-172.24.4.139:22-172.24.4.1:46594.service - OpenSSH per-connection server daemon (172.24.4.1:46594). Jan 30 15:45:56.352206 systemd-logind[1444]: Removed session 18. Jan 30 15:45:57.446777 sshd[4131]: Accepted publickey for core from 172.24.4.1 port 46594 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:57.451483 sshd[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:57.463601 systemd-logind[1444]: New session 19 of user core. Jan 30 15:45:57.472856 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 15:45:58.509268 sshd[4131]: pam_unix(sshd:session): session closed for user core Jan 30 15:45:58.521529 systemd[1]: sshd@16-172.24.4.139:22-172.24.4.1:46594.service: Deactivated successfully. Jan 30 15:45:58.526524 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 15:45:58.530596 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Jan 30 15:45:58.538160 systemd[1]: Started sshd@17-172.24.4.139:22-172.24.4.1:46598.service - OpenSSH per-connection server daemon (172.24.4.1:46598). Jan 30 15:45:58.541735 systemd-logind[1444]: Removed session 19. Jan 30 15:45:59.774929 sshd[4142]: Accepted publickey for core from 172.24.4.1 port 46598 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:45:59.777846 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:45:59.790471 systemd-logind[1444]: New session 20 of user core. Jan 30 15:45:59.799880 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 15:46:00.380301 sshd[4142]: pam_unix(sshd:session): session closed for user core Jan 30 15:46:00.388318 systemd[1]: sshd@17-172.24.4.139:22-172.24.4.1:46598.service: Deactivated successfully. Jan 30 15:46:00.394429 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 15:46:00.396988 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Jan 30 15:46:00.399404 systemd-logind[1444]: Removed session 20. Jan 30 15:46:05.403117 systemd[1]: Started sshd@18-172.24.4.139:22-172.24.4.1:34862.service - OpenSSH per-connection server daemon (172.24.4.1:34862). Jan 30 15:46:06.760808 sshd[4157]: Accepted publickey for core from 172.24.4.1 port 34862 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:46:06.763625 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:46:06.774917 systemd-logind[1444]: New session 21 of user core. Jan 30 15:46:06.779842 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 15:46:07.483029 sshd[4157]: pam_unix(sshd:session): session closed for user core Jan 30 15:46:07.489903 systemd[1]: sshd@18-172.24.4.139:22-172.24.4.1:34862.service: Deactivated successfully. Jan 30 15:46:07.495613 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 15:46:07.497812 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Jan 30 15:46:07.500118 systemd-logind[1444]: Removed session 21. 
Jan 30 15:46:12.507093 systemd[1]: Started sshd@19-172.24.4.139:22-172.24.4.1:34878.service - OpenSSH per-connection server daemon (172.24.4.1:34878). Jan 30 15:46:14.052307 sshd[4169]: Accepted publickey for core from 172.24.4.1 port 34878 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:46:14.055186 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:46:14.065562 systemd-logind[1444]: New session 22 of user core. Jan 30 15:46:14.077893 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 15:46:14.827377 sshd[4169]: pam_unix(sshd:session): session closed for user core Jan 30 15:46:14.832221 systemd[1]: sshd@19-172.24.4.139:22-172.24.4.1:34878.service: Deactivated successfully. Jan 30 15:46:14.834364 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 15:46:14.835324 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Jan 30 15:46:14.836881 systemd-logind[1444]: Removed session 22. Jan 30 15:46:19.853261 systemd[1]: Started sshd@20-172.24.4.139:22-172.24.4.1:55530.service - OpenSSH per-connection server daemon (172.24.4.1:55530). Jan 30 15:46:20.978976 sshd[4184]: Accepted publickey for core from 172.24.4.1 port 55530 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:46:20.981834 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:46:20.990968 systemd-logind[1444]: New session 23 of user core. Jan 30 15:46:21.003831 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 15:46:21.674817 sshd[4184]: pam_unix(sshd:session): session closed for user core Jan 30 15:46:21.685978 systemd[1]: sshd@20-172.24.4.139:22-172.24.4.1:55530.service: Deactivated successfully. Jan 30 15:46:21.690092 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 15:46:21.693644 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit. Jan 30 15:46:21.702222 systemd[1]: Started sshd@21-172.24.4.139:22-172.24.4.1:55536.service - OpenSSH per-connection server daemon (172.24.4.1:55536). Jan 30 15:46:21.705741 systemd-logind[1444]: Removed session 23. Jan 30 15:46:23.153128 sshd[4197]: Accepted publickey for core from 172.24.4.1 port 55536 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:46:23.155920 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:46:23.169921 systemd-logind[1444]: New session 24 of user core. Jan 30 15:46:23.176947 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 15:46:26.434960 systemd[1]: run-containerd-runc-k8s.io-6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea-runc.gBpdLn.mount: Deactivated successfully. 
Jan 30 15:46:26.454595 containerd[1464]: time="2025-01-30T15:46:26.454457859Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 15:46:26.478596 containerd[1464]: time="2025-01-30T15:46:26.477410165Z" level=info msg="StopContainer for \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\" with timeout 2 (s)" Jan 30 15:46:26.479631 containerd[1464]: time="2025-01-30T15:46:26.479565140Z" level=info msg="Stop container \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\" with signal terminated" Jan 30 15:46:26.490426 systemd-networkd[1374]: lxc_health: Link DOWN Jan 30 15:46:26.490435 systemd-networkd[1374]: lxc_health: Lost carrier Jan 30 15:46:26.504839 systemd[1]: cri-containerd-6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea.scope: Deactivated successfully. Jan 30 15:46:26.505125 systemd[1]: cri-containerd-6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea.scope: Consumed 8.630s CPU time. Jan 30 15:46:26.528209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea-rootfs.mount: Deactivated successfully. Jan 30 15:46:26.616522 kubelet[2633]: E0130 15:46:26.616435 2633 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 15:46:26.671258 containerd[1464]: time="2025-01-30T15:46:26.670729483Z" level=info msg="StopContainer for \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\" with timeout 30 (s)" Jan 30 15:46:26.673079 containerd[1464]: time="2025-01-30T15:46:26.672800839Z" level=info msg="Stop container \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\" with signal terminated" Jan 30 15:46:26.695220 systemd[1]: cri-containerd-c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846.scope: Deactivated successfully. Jan 30 15:46:26.738500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846-rootfs.mount: Deactivated successfully. 
Jan 30 15:46:26.834525 containerd[1464]: time="2025-01-30T15:46:26.834408592Z" level=info msg="shim disconnected" id=6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea namespace=k8s.io Jan 30 15:46:26.834934 containerd[1464]: time="2025-01-30T15:46:26.834848721Z" level=warning msg="cleaning up after shim disconnected" id=6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea namespace=k8s.io Jan 30 15:46:26.835318 containerd[1464]: time="2025-01-30T15:46:26.834885842Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:46:26.849699 containerd[1464]: time="2025-01-30T15:46:26.849477841Z" level=info msg="shim disconnected" id=c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846 namespace=k8s.io Jan 30 15:46:26.850123 containerd[1464]: time="2025-01-30T15:46:26.849956523Z" level=warning msg="cleaning up after shim disconnected" id=c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846 namespace=k8s.io Jan 30 15:46:26.850123 containerd[1464]: time="2025-01-30T15:46:26.850044970Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:46:26.922263 containerd[1464]: time="2025-01-30T15:46:26.922150216Z" level=info msg="StopContainer for \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\" returns successfully" Jan 30 15:46:26.924744 containerd[1464]: time="2025-01-30T15:46:26.924667775Z" level=info msg="StopPodSandbox for \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\"" Jan 30 15:46:26.924880 containerd[1464]: time="2025-01-30T15:46:26.924753507Z" level=info msg="Container to stop \"350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:46:26.924880 containerd[1464]: time="2025-01-30T15:46:26.924789093Z" level=info msg="Container to stop \"de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:46:26.924880 containerd[1464]: time="2025-01-30T15:46:26.924819801Z" level=info msg="Container to stop \"407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:46:26.924880 containerd[1464]: time="2025-01-30T15:46:26.924849257Z" level=info msg="Container to stop \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:46:26.924880 containerd[1464]: time="2025-01-30T15:46:26.924875166Z" level=info msg="Container to stop \"e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:46:26.931272 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281-shm.mount: Deactivated successfully. 
Jan 30 15:46:26.941456 containerd[1464]: time="2025-01-30T15:46:26.941347371Z" level=info msg="StopContainer for \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\" returns successfully" Jan 30 15:46:26.944620 containerd[1464]: time="2025-01-30T15:46:26.944571682Z" level=info msg="StopPodSandbox for \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\"" Jan 30 15:46:26.944896 containerd[1464]: time="2025-01-30T15:46:26.944839718Z" level=info msg="Container to stop \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:46:26.947040 systemd[1]: cri-containerd-f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281.scope: Deactivated successfully. Jan 30 15:46:26.971692 systemd[1]: cri-containerd-0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430.scope: Deactivated successfully. Jan 30 15:46:26.999627 containerd[1464]: time="2025-01-30T15:46:26.999452454Z" level=info msg="shim disconnected" id=0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430 namespace=k8s.io Jan 30 15:46:27.000145 containerd[1464]: time="2025-01-30T15:46:27.000072723Z" level=warning msg="cleaning up after shim disconnected" id=0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430 namespace=k8s.io Jan 30 15:46:27.000145 containerd[1464]: time="2025-01-30T15:46:27.000092060Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:46:27.000684 containerd[1464]: time="2025-01-30T15:46:26.999471430Z" level=info msg="shim disconnected" id=f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281 namespace=k8s.io Jan 30 15:46:27.000737 containerd[1464]: time="2025-01-30T15:46:27.000686141Z" level=warning msg="cleaning up after shim disconnected" id=f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281 namespace=k8s.io Jan 30 15:46:27.000737 containerd[1464]: time="2025-01-30T15:46:27.000697843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:46:27.016348 containerd[1464]: time="2025-01-30T15:46:27.016207090Z" level=info msg="TearDown network for sandbox \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\" successfully" Jan 30 15:46:27.016348 containerd[1464]: time="2025-01-30T15:46:27.016240292Z" level=info msg="StopPodSandbox for \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\" returns successfully" Jan 30 15:46:27.020591 containerd[1464]: time="2025-01-30T15:46:27.020448328Z" level=info msg="TearDown network for sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" successfully" Jan 30 15:46:27.020591 containerd[1464]: time="2025-01-30T15:46:27.020473155Z" level=info msg="StopPodSandbox for \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" returns successfully" Jan 30 15:46:27.072302 kubelet[2633]: I0130 15:46:27.072234 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-host-proc-sys-net\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072302 kubelet[2633]: I0130 15:46:27.072303 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db47a1b6-ab18-40b9-aa79-cb244618c46b-hubble-tls\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: 
\"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072550 kubelet[2633]: I0130 15:46:27.072325 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-lib-modules\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072550 kubelet[2633]: I0130 15:46:27.072342 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-bpf-maps\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072550 kubelet[2633]: I0130 15:46:27.072360 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-cni-path\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072550 kubelet[2633]: I0130 15:46:27.072378 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-host-proc-sys-kernel\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072550 kubelet[2633]: I0130 15:46:27.072397 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-cilium-cgroup\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072550 kubelet[2633]: I0130 15:46:27.072413 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-etc-cni-netd\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072733 kubelet[2633]: I0130 15:46:27.072428 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-cilium-run\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072733 kubelet[2633]: I0130 15:46:27.072447 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfpln\" (UniqueName: \"kubernetes.io/projected/db47a1b6-ab18-40b9-aa79-cb244618c46b-kube-api-access-xfpln\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072733 kubelet[2633]: I0130 15:46:27.072463 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-xtables-lock\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072733 kubelet[2633]: I0130 15:46:27.072484 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5wnp\" (UniqueName: \"kubernetes.io/projected/cbe21cea-bea3-4424-92fd-86280713caac-kube-api-access-k5wnp\") pod \"cbe21cea-bea3-4424-92fd-86280713caac\" (UID: \"cbe21cea-bea3-4424-92fd-86280713caac\") " Jan 30 
15:46:27.072733 kubelet[2633]: I0130 15:46:27.072504 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db47a1b6-ab18-40b9-aa79-cb244618c46b-clustermesh-secrets\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072733 kubelet[2633]: I0130 15:46:27.072523 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db47a1b6-ab18-40b9-aa79-cb244618c46b-cilium-config-path\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072917 kubelet[2633]: I0130 15:46:27.072559 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-hostproc\") pod \"db47a1b6-ab18-40b9-aa79-cb244618c46b\" (UID: \"db47a1b6-ab18-40b9-aa79-cb244618c46b\") " Jan 30 15:46:27.072917 kubelet[2633]: I0130 15:46:27.072580 2633 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cbe21cea-bea3-4424-92fd-86280713caac-cilium-config-path\") pod \"cbe21cea-bea3-4424-92fd-86280713caac\" (UID: \"cbe21cea-bea3-4424-92fd-86280713caac\") " Jan 30 15:46:27.073378 kubelet[2633]: I0130 15:46:27.073126 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:27.073378 kubelet[2633]: I0130 15:46:27.073201 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:27.073471 kubelet[2633]: I0130 15:46:27.073407 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:27.073617 kubelet[2633]: I0130 15:46:27.073568 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:27.073798 kubelet[2633]: I0130 15:46:27.073694 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:27.073798 kubelet[2633]: I0130 15:46:27.073718 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-cni-path" (OuterVolumeSpecName: "cni-path") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:27.073798 kubelet[2633]: I0130 15:46:27.073738 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:27.073798 kubelet[2633]: I0130 15:46:27.073758 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:27.074097 kubelet[2633]: I0130 15:46:27.073922 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:27.077566 kubelet[2633]: I0130 15:46:27.077503 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-hostproc" (OuterVolumeSpecName: "hostproc") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:46:27.078220 kubelet[2633]: I0130 15:46:27.077854 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db47a1b6-ab18-40b9-aa79-cb244618c46b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:46:27.079228 kubelet[2633]: I0130 15:46:27.078646 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db47a1b6-ab18-40b9-aa79-cb244618c46b-kube-api-access-xfpln" (OuterVolumeSpecName: "kube-api-access-xfpln") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "kube-api-access-xfpln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:46:27.080240 kubelet[2633]: I0130 15:46:27.080204 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db47a1b6-ab18-40b9-aa79-cb244618c46b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:46:27.080395 kubelet[2633]: I0130 15:46:27.080366 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbe21cea-bea3-4424-92fd-86280713caac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cbe21cea-bea3-4424-92fd-86280713caac" (UID: "cbe21cea-bea3-4424-92fd-86280713caac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:46:27.082780 kubelet[2633]: I0130 15:46:27.082579 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db47a1b6-ab18-40b9-aa79-cb244618c46b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "db47a1b6-ab18-40b9-aa79-cb244618c46b" (UID: "db47a1b6-ab18-40b9-aa79-cb244618c46b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:46:27.082998 kubelet[2633]: I0130 15:46:27.082888 2633 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe21cea-bea3-4424-92fd-86280713caac-kube-api-access-k5wnp" (OuterVolumeSpecName: "kube-api-access-k5wnp") pod "cbe21cea-bea3-4424-92fd-86280713caac" (UID: "cbe21cea-bea3-4424-92fd-86280713caac"). InnerVolumeSpecName "kube-api-access-k5wnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:46:27.173928 kubelet[2633]: I0130 15:46:27.173658 2633 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db47a1b6-ab18-40b9-aa79-cb244618c46b-cilium-config-path\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.173928 kubelet[2633]: I0130 15:46:27.173698 2633 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-hostproc\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.173928 kubelet[2633]: I0130 15:46:27.173714 2633 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cbe21cea-bea3-4424-92fd-86280713caac-cilium-config-path\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.173928 kubelet[2633]: I0130 15:46:27.173726 2633 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-host-proc-sys-net\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.173928 kubelet[2633]: I0130 15:46:27.173737 2633 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db47a1b6-ab18-40b9-aa79-cb244618c46b-hubble-tls\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.173928 kubelet[2633]: I0130 15:46:27.173761 2633 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-lib-modules\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.173928 kubelet[2633]: I0130 15:46:27.173770 2633 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-bpf-maps\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.174264 kubelet[2633]: I0130 15:46:27.173780 2633 
reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-cni-path\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.174264 kubelet[2633]: I0130 15:46:27.173790 2633 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-host-proc-sys-kernel\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.174264 kubelet[2633]: I0130 15:46:27.173801 2633 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-cilium-cgroup\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.174264 kubelet[2633]: I0130 15:46:27.173810 2633 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-etc-cni-netd\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.174264 kubelet[2633]: I0130 15:46:27.173820 2633 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-cilium-run\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.174264 kubelet[2633]: I0130 15:46:27.173831 2633 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xfpln\" (UniqueName: \"kubernetes.io/projected/db47a1b6-ab18-40b9-aa79-cb244618c46b-kube-api-access-xfpln\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.174264 kubelet[2633]: I0130 15:46:27.173846 2633 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db47a1b6-ab18-40b9-aa79-cb244618c46b-xtables-lock\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.174436 kubelet[2633]: I0130 15:46:27.173857 2633 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k5wnp\" (UniqueName: \"kubernetes.io/projected/cbe21cea-bea3-4424-92fd-86280713caac-kube-api-access-k5wnp\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.174436 kubelet[2633]: I0130 15:46:27.173869 2633 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db47a1b6-ab18-40b9-aa79-cb244618c46b-clustermesh-secrets\") on node \"ci-4081-3-0-c-719cef3df4.novalocal\" DevicePath \"\"" Jan 30 15:46:27.192597 kubelet[2633]: I0130 15:46:27.192300 2633 scope.go:117] "RemoveContainer" containerID="6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea" Jan 30 15:46:27.196146 containerd[1464]: time="2025-01-30T15:46:27.196064065Z" level=info msg="RemoveContainer for \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\"" Jan 30 15:46:27.203840 systemd[1]: Removed slice kubepods-burstable-poddb47a1b6_ab18_40b9_aa79_cb244618c46b.slice - libcontainer container kubepods-burstable-poddb47a1b6_ab18_40b9_aa79_cb244618c46b.slice. Jan 30 15:46:27.204699 systemd[1]: kubepods-burstable-poddb47a1b6_ab18_40b9_aa79_cb244618c46b.slice: Consumed 8.727s CPU time. Jan 30 15:46:27.208427 systemd[1]: Removed slice kubepods-besteffort-podcbe21cea_bea3_4424_92fd_86280713caac.slice - libcontainer container kubepods-besteffort-podcbe21cea_bea3_4424_92fd_86280713caac.slice. 
Jan 30 15:46:27.418218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430-rootfs.mount: Deactivated successfully. Jan 30 15:46:27.418802 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430-shm.mount: Deactivated successfully. Jan 30 15:46:27.419066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281-rootfs.mount: Deactivated successfully. Jan 30 15:46:27.419244 systemd[1]: var-lib-kubelet-pods-cbe21cea\x2dbea3\x2d4424\x2d92fd\x2d86280713caac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk5wnp.mount: Deactivated successfully. Jan 30 15:46:27.419479 systemd[1]: var-lib-kubelet-pods-db47a1b6\x2dab18\x2d40b9\x2daa79\x2dcb244618c46b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxfpln.mount: Deactivated successfully. Jan 30 15:46:27.419693 systemd[1]: var-lib-kubelet-pods-db47a1b6\x2dab18\x2d40b9\x2daa79\x2dcb244618c46b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 15:46:27.419851 systemd[1]: var-lib-kubelet-pods-db47a1b6\x2dab18\x2d40b9\x2daa79\x2dcb244618c46b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 15:46:27.430967 containerd[1464]: time="2025-01-30T15:46:27.429772233Z" level=info msg="RemoveContainer for \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\" returns successfully" Jan 30 15:46:27.432277 kubelet[2633]: I0130 15:46:27.432000 2633 scope.go:117] "RemoveContainer" containerID="407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf" Jan 30 15:46:27.439917 containerd[1464]: time="2025-01-30T15:46:27.439820513Z" level=info msg="RemoveContainer for \"407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf\"" Jan 30 15:46:27.485614 kubelet[2633]: I0130 15:46:27.484729 2633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db47a1b6-ab18-40b9-aa79-cb244618c46b" path="/var/lib/kubelet/pods/db47a1b6-ab18-40b9-aa79-cb244618c46b/volumes" Jan 30 15:46:27.672450 containerd[1464]: time="2025-01-30T15:46:27.672349908Z" level=info msg="RemoveContainer for \"407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf\" returns successfully" Jan 30 15:46:27.673163 kubelet[2633]: I0130 15:46:27.672941 2633 scope.go:117] "RemoveContainer" containerID="de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7" Jan 30 15:46:27.676124 containerd[1464]: time="2025-01-30T15:46:27.675951169Z" level=info msg="RemoveContainer for \"de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7\"" Jan 30 15:46:27.920971 containerd[1464]: time="2025-01-30T15:46:27.920910005Z" level=info msg="RemoveContainer for \"de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7\" returns successfully" Jan 30 15:46:27.921599 kubelet[2633]: I0130 15:46:27.921323 2633 scope.go:117] "RemoveContainer" containerID="e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073" Jan 30 15:46:27.925339 containerd[1464]: time="2025-01-30T15:46:27.924392071Z" level=info msg="RemoveContainer for \"e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073\"" Jan 30 15:46:27.949459 containerd[1464]: time="2025-01-30T15:46:27.949349186Z" level=info msg="RemoveContainer for \"e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073\" returns successfully" Jan 30 15:46:27.949945 kubelet[2633]: 
I0130 15:46:27.949849 2633 scope.go:117] "RemoveContainer" containerID="350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57" Jan 30 15:46:27.952041 containerd[1464]: time="2025-01-30T15:46:27.951976420Z" level=info msg="RemoveContainer for \"350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57\"" Jan 30 15:46:27.967500 containerd[1464]: time="2025-01-30T15:46:27.967392612Z" level=info msg="RemoveContainer for \"350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57\" returns successfully" Jan 30 15:46:27.968198 kubelet[2633]: I0130 15:46:27.967853 2633 scope.go:117] "RemoveContainer" containerID="6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea" Jan 30 15:46:27.968798 containerd[1464]: time="2025-01-30T15:46:27.968627551Z" level=error msg="ContainerStatus for \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\": not found" Jan 30 15:46:27.969116 kubelet[2633]: E0130 15:46:27.968996 2633 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\": not found" containerID="6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea" Jan 30 15:46:27.969306 kubelet[2633]: I0130 15:46:27.969139 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea"} err="failed to get container status \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"6bca1465e9a87786527266b8783dbc4f18814e923c187db1c33a1491654150ea\": not found" Jan 30 15:46:27.969306 kubelet[2633]: I0130 15:46:27.969287 2633 scope.go:117] "RemoveContainer" containerID="407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf" Jan 30 15:46:27.969854 containerd[1464]: time="2025-01-30T15:46:27.969779874Z" level=error msg="ContainerStatus for \"407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf\": not found" Jan 30 15:46:27.970484 kubelet[2633]: E0130 15:46:27.970026 2633 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf\": not found" containerID="407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf" Jan 30 15:46:27.970484 kubelet[2633]: I0130 15:46:27.970081 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf"} err="failed to get container status \"407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"407dca8227d6e0fdb7a10ae79f3be9c51e5edfad04ae63fe4d1a0dabb6a657cf\": not found" Jan 30 15:46:27.970484 kubelet[2633]: I0130 15:46:27.970117 2633 scope.go:117] "RemoveContainer" containerID="de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7" Jan 30 15:46:27.971082 
containerd[1464]: time="2025-01-30T15:46:27.970882764Z" level=error msg="ContainerStatus for \"de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7\": not found" Jan 30 15:46:27.971487 kubelet[2633]: E0130 15:46:27.971194 2633 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7\": not found" containerID="de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7" Jan 30 15:46:27.971487 kubelet[2633]: I0130 15:46:27.971242 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7"} err="failed to get container status \"de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"de258c1e1f5c2714e9f0fae00d074d436e2a6093eccebe8724afc9b0f1ec91f7\": not found" Jan 30 15:46:27.971487 kubelet[2633]: I0130 15:46:27.971276 2633 scope.go:117] "RemoveContainer" containerID="e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073" Jan 30 15:46:27.971801 containerd[1464]: time="2025-01-30T15:46:27.971650081Z" level=error msg="ContainerStatus for \"e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073\": not found" Jan 30 15:46:27.972460 kubelet[2633]: E0130 15:46:27.972133 2633 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073\": not found" containerID="e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073" Jan 30 15:46:27.972460 kubelet[2633]: I0130 15:46:27.972195 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073"} err="failed to get container status \"e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6b488d039eb4c1e2f45831c84042b41cb2fe221d4bfcc2bb0ac4c3a844d3073\": not found" Jan 30 15:46:27.972460 kubelet[2633]: I0130 15:46:27.972242 2633 scope.go:117] "RemoveContainer" containerID="350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57" Jan 30 15:46:27.974341 containerd[1464]: time="2025-01-30T15:46:27.973676282Z" level=error msg="ContainerStatus for \"350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57\": not found" Jan 30 15:46:27.974616 kubelet[2633]: E0130 15:46:27.974071 2633 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57\": not found" containerID="350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57" Jan 30 
15:46:27.974616 kubelet[2633]: I0130 15:46:27.974124 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57"} err="failed to get container status \"350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57\": rpc error: code = NotFound desc = an error occurred when try to find container \"350433133369bed34876959ba86d96d869a2d3dec1477160796f5185591f0a57\": not found" Jan 30 15:46:27.974616 kubelet[2633]: I0130 15:46:27.974172 2633 scope.go:117] "RemoveContainer" containerID="c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846" Jan 30 15:46:27.977337 containerd[1464]: time="2025-01-30T15:46:27.977045255Z" level=info msg="RemoveContainer for \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\"" Jan 30 15:46:28.002303 containerd[1464]: time="2025-01-30T15:46:28.002234307Z" level=info msg="RemoveContainer for \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\" returns successfully" Jan 30 15:46:28.003033 kubelet[2633]: I0130 15:46:28.002662 2633 scope.go:117] "RemoveContainer" containerID="c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846" Jan 30 15:46:28.003109 containerd[1464]: time="2025-01-30T15:46:28.003061787Z" level=error msg="ContainerStatus for \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\": not found" Jan 30 15:46:28.003399 kubelet[2633]: E0130 15:46:28.003346 2633 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\": not found" containerID="c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846" Jan 30 15:46:28.003585 kubelet[2633]: I0130 15:46:28.003402 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846"} err="failed to get container status \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3574f26872b5298821e9f5fb74f0a4ae20dfe87ed92b703b726f4819d774846\": not found" Jan 30 15:46:28.015960 sshd[4197]: pam_unix(sshd:session): session closed for user core Jan 30 15:46:28.026347 systemd[1]: sshd@21-172.24.4.139:22-172.24.4.1:55536.service: Deactivated successfully. Jan 30 15:46:28.031772 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 15:46:28.032297 systemd[1]: session-24.scope: Consumed 1.259s CPU time. Jan 30 15:46:28.036081 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit. Jan 30 15:46:28.042367 systemd[1]: Started sshd@22-172.24.4.139:22-172.24.4.1:47982.service - OpenSSH per-connection server daemon (172.24.4.1:47982). Jan 30 15:46:28.046019 systemd-logind[1444]: Removed session 24. Jan 30 15:46:29.219974 sshd[4362]: Accepted publickey for core from 172.24.4.1 port 47982 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:46:29.223745 sshd[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:46:29.235868 systemd-logind[1444]: New session 25 of user core. 
Jan 30 15:46:29.244043 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 15:46:29.482627 kubelet[2633]: I0130 15:46:29.481901 2633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbe21cea-bea3-4424-92fd-86280713caac" path="/var/lib/kubelet/pods/cbe21cea-bea3-4424-92fd-86280713caac/volumes" Jan 30 15:46:30.339563 kubelet[2633]: I0130 15:46:30.338587 2633 topology_manager.go:215] "Topology Admit Handler" podUID="ed68017a-ed9c-4430-bfa5-c5d66b59d3d7" podNamespace="kube-system" podName="cilium-r8fr6" Jan 30 15:46:30.339563 kubelet[2633]: E0130 15:46:30.338690 2633 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db47a1b6-ab18-40b9-aa79-cb244618c46b" containerName="mount-cgroup" Jan 30 15:46:30.339563 kubelet[2633]: E0130 15:46:30.338704 2633 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db47a1b6-ab18-40b9-aa79-cb244618c46b" containerName="mount-bpf-fs" Jan 30 15:46:30.339563 kubelet[2633]: E0130 15:46:30.338713 2633 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cbe21cea-bea3-4424-92fd-86280713caac" containerName="cilium-operator" Jan 30 15:46:30.339563 kubelet[2633]: E0130 15:46:30.338903 2633 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db47a1b6-ab18-40b9-aa79-cb244618c46b" containerName="clean-cilium-state" Jan 30 15:46:30.339563 kubelet[2633]: E0130 15:46:30.338916 2633 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db47a1b6-ab18-40b9-aa79-cb244618c46b" containerName="cilium-agent" Jan 30 15:46:30.339563 kubelet[2633]: E0130 15:46:30.338926 2633 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db47a1b6-ab18-40b9-aa79-cb244618c46b" containerName="apply-sysctl-overwrites" Jan 30 15:46:30.339563 kubelet[2633]: I0130 15:46:30.338995 2633 memory_manager.go:354] "RemoveStaleState removing state" podUID="db47a1b6-ab18-40b9-aa79-cb244618c46b" containerName="cilium-agent" Jan 30 15:46:30.339563 kubelet[2633]: I0130 15:46:30.339003 2633 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe21cea-bea3-4424-92fd-86280713caac" containerName="cilium-operator" Jan 30 15:46:30.357941 systemd[1]: Created slice kubepods-burstable-poded68017a_ed9c_4430_bfa5_c5d66b59d3d7.slice - libcontainer container kubepods-burstable-poded68017a_ed9c_4430_bfa5_c5d66b59d3d7.slice. 
Jan 30 15:46:30.398361 kubelet[2633]: I0130 15:46:30.398314 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-xtables-lock\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398361 kubelet[2633]: I0130 15:46:30.398362 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-cilium-ipsec-secrets\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398361 kubelet[2633]: I0130 15:46:30.398382 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-cilium-run\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398676 kubelet[2633]: I0130 15:46:30.398403 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-cilium-config-path\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398676 kubelet[2633]: I0130 15:46:30.398424 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-hubble-tls\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398676 kubelet[2633]: I0130 15:46:30.398444 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-cilium-cgroup\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398676 kubelet[2633]: I0130 15:46:30.398473 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-lib-modules\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398676 kubelet[2633]: I0130 15:46:30.398499 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-clustermesh-secrets\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398676 kubelet[2633]: I0130 15:46:30.398520 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97wql\" (UniqueName: \"kubernetes.io/projected/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-kube-api-access-97wql\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398832 kubelet[2633]: I0130 15:46:30.398566 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-etc-cni-netd\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398832 kubelet[2633]: I0130 15:46:30.398597 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-host-proc-sys-net\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398832 kubelet[2633]: I0130 15:46:30.398621 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-bpf-maps\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398832 kubelet[2633]: I0130 15:46:30.398638 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-cni-path\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398832 kubelet[2633]: I0130 15:46:30.398682 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-hostproc\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.398832 kubelet[2633]: I0130 15:46:30.398710 2633 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed68017a-ed9c-4430-bfa5-c5d66b59d3d7-host-proc-sys-kernel\") pod \"cilium-r8fr6\" (UID: \"ed68017a-ed9c-4430-bfa5-c5d66b59d3d7\") " pod="kube-system/cilium-r8fr6" Jan 30 15:46:30.458979 sshd[4362]: pam_unix(sshd:session): session closed for user core Jan 30 15:46:30.468202 systemd[1]: sshd@22-172.24.4.139:22-172.24.4.1:47982.service: Deactivated successfully. Jan 30 15:46:30.469947 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 15:46:30.472575 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit. Jan 30 15:46:30.477843 systemd[1]: Started sshd@23-172.24.4.139:22-172.24.4.1:47996.service - OpenSSH per-connection server daemon (172.24.4.1:47996). Jan 30 15:46:30.480094 systemd-logind[1444]: Removed session 25. Jan 30 15:46:30.665371 containerd[1464]: time="2025-01-30T15:46:30.665295348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8fr6,Uid:ed68017a-ed9c-4430-bfa5-c5d66b59d3d7,Namespace:kube-system,Attempt:0,}" Jan 30 15:46:30.720991 containerd[1464]: time="2025-01-30T15:46:30.720172986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:46:30.720991 containerd[1464]: time="2025-01-30T15:46:30.720300106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:46:30.720991 containerd[1464]: time="2025-01-30T15:46:30.720365859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:30.720991 containerd[1464]: time="2025-01-30T15:46:30.720603238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:46:30.767811 systemd[1]: Started cri-containerd-7c71f6435aaadaee0a6eeb4210ab132ef536e3f7ec4abaafe156b211073a2d04.scope - libcontainer container 7c71f6435aaadaee0a6eeb4210ab132ef536e3f7ec4abaafe156b211073a2d04. Jan 30 15:46:30.796304 containerd[1464]: time="2025-01-30T15:46:30.796243812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8fr6,Uid:ed68017a-ed9c-4430-bfa5-c5d66b59d3d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c71f6435aaadaee0a6eeb4210ab132ef536e3f7ec4abaafe156b211073a2d04\"" Jan 30 15:46:30.801861 containerd[1464]: time="2025-01-30T15:46:30.801703677Z" level=info msg="CreateContainer within sandbox \"7c71f6435aaadaee0a6eeb4210ab132ef536e3f7ec4abaafe156b211073a2d04\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 15:46:30.843529 containerd[1464]: time="2025-01-30T15:46:30.843460516Z" level=info msg="CreateContainer within sandbox \"7c71f6435aaadaee0a6eeb4210ab132ef536e3f7ec4abaafe156b211073a2d04\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"67f95d11f285e307a9b0ebd804a2e3752b460ae5321d61103f9d13ea979d5c64\"" Jan 30 15:46:30.845366 containerd[1464]: time="2025-01-30T15:46:30.844447096Z" level=info msg="StartContainer for \"67f95d11f285e307a9b0ebd804a2e3752b460ae5321d61103f9d13ea979d5c64\"" Jan 30 15:46:30.887022 systemd[1]: Started cri-containerd-67f95d11f285e307a9b0ebd804a2e3752b460ae5321d61103f9d13ea979d5c64.scope - libcontainer container 67f95d11f285e307a9b0ebd804a2e3752b460ae5321d61103f9d13ea979d5c64. Jan 30 15:46:30.930999 containerd[1464]: time="2025-01-30T15:46:30.930857297Z" level=info msg="StartContainer for \"67f95d11f285e307a9b0ebd804a2e3752b460ae5321d61103f9d13ea979d5c64\" returns successfully" Jan 30 15:46:30.938486 systemd[1]: cri-containerd-67f95d11f285e307a9b0ebd804a2e3752b460ae5321d61103f9d13ea979d5c64.scope: Deactivated successfully. 
Jan 30 15:46:30.989732 containerd[1464]: time="2025-01-30T15:46:30.989407911Z" level=info msg="shim disconnected" id=67f95d11f285e307a9b0ebd804a2e3752b460ae5321d61103f9d13ea979d5c64 namespace=k8s.io Jan 30 15:46:30.989732 containerd[1464]: time="2025-01-30T15:46:30.989491387Z" level=warning msg="cleaning up after shim disconnected" id=67f95d11f285e307a9b0ebd804a2e3752b460ae5321d61103f9d13ea979d5c64 namespace=k8s.io Jan 30 15:46:30.989732 containerd[1464]: time="2025-01-30T15:46:30.989506156Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:46:31.231706 containerd[1464]: time="2025-01-30T15:46:31.231295214Z" level=info msg="CreateContainer within sandbox \"7c71f6435aaadaee0a6eeb4210ab132ef536e3f7ec4abaafe156b211073a2d04\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 15:46:31.258369 containerd[1464]: time="2025-01-30T15:46:31.258095511Z" level=info msg="CreateContainer within sandbox \"7c71f6435aaadaee0a6eeb4210ab132ef536e3f7ec4abaafe156b211073a2d04\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c21b8dc8f470e8d935415be4d46de7e155701769f58b3075c524fac3c7504d95\"" Jan 30 15:46:31.262789 containerd[1464]: time="2025-01-30T15:46:31.261017880Z" level=info msg="StartContainer for \"c21b8dc8f470e8d935415be4d46de7e155701769f58b3075c524fac3c7504d95\"" Jan 30 15:46:31.326970 systemd[1]: Started cri-containerd-c21b8dc8f470e8d935415be4d46de7e155701769f58b3075c524fac3c7504d95.scope - libcontainer container c21b8dc8f470e8d935415be4d46de7e155701769f58b3075c524fac3c7504d95. Jan 30 15:46:31.368551 containerd[1464]: time="2025-01-30T15:46:31.368479339Z" level=info msg="StartContainer for \"c21b8dc8f470e8d935415be4d46de7e155701769f58b3075c524fac3c7504d95\" returns successfully" Jan 30 15:46:31.372213 systemd[1]: cri-containerd-c21b8dc8f470e8d935415be4d46de7e155701769f58b3075c524fac3c7504d95.scope: Deactivated successfully. 
Jan 30 15:46:31.409487 containerd[1464]: time="2025-01-30T15:46:31.409377395Z" level=info msg="shim disconnected" id=c21b8dc8f470e8d935415be4d46de7e155701769f58b3075c524fac3c7504d95 namespace=k8s.io Jan 30 15:46:31.409487 containerd[1464]: time="2025-01-30T15:46:31.409468266Z" level=warning msg="cleaning up after shim disconnected" id=c21b8dc8f470e8d935415be4d46de7e155701769f58b3075c524fac3c7504d95 namespace=k8s.io Jan 30 15:46:31.409487 containerd[1464]: time="2025-01-30T15:46:31.409480309Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:46:31.503058 containerd[1464]: time="2025-01-30T15:46:31.502640063Z" level=info msg="StopPodSandbox for \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\"" Jan 30 15:46:31.503058 containerd[1464]: time="2025-01-30T15:46:31.502805305Z" level=info msg="TearDown network for sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" successfully" Jan 30 15:46:31.503058 containerd[1464]: time="2025-01-30T15:46:31.502830452Z" level=info msg="StopPodSandbox for \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" returns successfully" Jan 30 15:46:31.506580 containerd[1464]: time="2025-01-30T15:46:31.504629624Z" level=info msg="RemovePodSandbox for \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\"" Jan 30 15:46:31.506580 containerd[1464]: time="2025-01-30T15:46:31.504704295Z" level=info msg="Forcibly stopping sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\"" Jan 30 15:46:31.506580 containerd[1464]: time="2025-01-30T15:46:31.504792582Z" level=info msg="TearDown network for sandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" successfully" Jan 30 15:46:31.517040 containerd[1464]: time="2025-01-30T15:46:31.516939883Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 15:46:31.517348 containerd[1464]: time="2025-01-30T15:46:31.517313327Z" level=info msg="RemovePodSandbox \"f92cf746d4854addd5624bea67eec07448124603cc9ce195f44a777addebd281\" returns successfully" Jan 30 15:46:31.518400 containerd[1464]: time="2025-01-30T15:46:31.518362625Z" level=info msg="StopPodSandbox for \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\"" Jan 30 15:46:31.518742 containerd[1464]: time="2025-01-30T15:46:31.518709759Z" level=info msg="TearDown network for sandbox \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\" successfully" Jan 30 15:46:31.518868 containerd[1464]: time="2025-01-30T15:46:31.518841709Z" level=info msg="StopPodSandbox for \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\" returns successfully" Jan 30 15:46:31.519637 containerd[1464]: time="2025-01-30T15:46:31.519530797Z" level=info msg="RemovePodSandbox for \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\"" Jan 30 15:46:31.519750 containerd[1464]: time="2025-01-30T15:46:31.519652768Z" level=info msg="Forcibly stopping sandbox \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\"" Jan 30 15:46:31.519840 containerd[1464]: time="2025-01-30T15:46:31.519789736Z" level=info msg="TearDown network for sandbox \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\" successfully" Jan 30 15:46:31.525798 containerd[1464]: time="2025-01-30T15:46:31.525714616Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:46:31.525932 containerd[1464]: time="2025-01-30T15:46:31.525831907Z" level=info msg="RemovePodSandbox \"0a52a62e0bfcf0d2c15d8efa02af942e872c5323732717090adb296a14cb5430\" returns successfully" Jan 30 15:46:31.618077 kubelet[2633]: E0130 15:46:31.617872 2633 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 15:46:31.764807 sshd[4374]: Accepted publickey for core from 172.24.4.1 port 47996 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:46:31.768333 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:46:31.778779 systemd-logind[1444]: New session 26 of user core. Jan 30 15:46:31.788845 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 15:46:32.250796 containerd[1464]: time="2025-01-30T15:46:32.250491797Z" level=info msg="CreateContainer within sandbox \"7c71f6435aaadaee0a6eeb4210ab132ef536e3f7ec4abaafe156b211073a2d04\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 15:46:32.287155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788405555.mount: Deactivated successfully. 
Jan 30 15:46:32.306624 containerd[1464]: time="2025-01-30T15:46:32.306471188Z" level=info msg="CreateContainer within sandbox \"7c71f6435aaadaee0a6eeb4210ab132ef536e3f7ec4abaafe156b211073a2d04\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6226d7a8313e65e7f638f1e83ef36bcc1b418ab28e2e6c43cd43a5d0279ce5d7\"" Jan 30 15:46:32.307325 containerd[1464]: time="2025-01-30T15:46:32.307232985Z" level=info msg="StartContainer for \"6226d7a8313e65e7f638f1e83ef36bcc1b418ab28e2e6c43cd43a5d0279ce5d7\"" Jan 30 15:46:32.360046 systemd[1]: Started cri-containerd-6226d7a8313e65e7f638f1e83ef36bcc1b418ab28e2e6c43cd43a5d0279ce5d7.scope - libcontainer container 6226d7a8313e65e7f638f1e83ef36bcc1b418ab28e2e6c43cd43a5d0279ce5d7. Jan 30 15:46:32.408718 systemd[1]: cri-containerd-6226d7a8313e65e7f638f1e83ef36bcc1b418ab28e2e6c43cd43a5d0279ce5d7.scope: Deactivated successfully. Jan 30 15:46:32.413455 containerd[1464]: time="2025-01-30T15:46:32.412876092Z" level=info msg="StartContainer for \"6226d7a8313e65e7f638f1e83ef36bcc1b418ab28e2e6c43cd43a5d0279ce5d7\" returns successfully" Jan 30 15:46:32.445221 sshd[4374]: pam_unix(sshd:session): session closed for user core Jan 30 15:46:32.447182 containerd[1464]: time="2025-01-30T15:46:32.446998089Z" level=info msg="shim disconnected" id=6226d7a8313e65e7f638f1e83ef36bcc1b418ab28e2e6c43cd43a5d0279ce5d7 namespace=k8s.io Jan 30 15:46:32.447182 containerd[1464]: time="2025-01-30T15:46:32.447053663Z" level=warning msg="cleaning up after shim disconnected" id=6226d7a8313e65e7f638f1e83ef36bcc1b418ab28e2e6c43cd43a5d0279ce5d7 namespace=k8s.io Jan 30 15:46:32.447182 containerd[1464]: time="2025-01-30T15:46:32.447063001Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:46:32.452356 systemd[1]: sshd@23-172.24.4.139:22-172.24.4.1:47996.service: Deactivated successfully. Jan 30 15:46:32.455859 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 15:46:32.460254 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit. Jan 30 15:46:32.464011 systemd[1]: Started sshd@24-172.24.4.139:22-172.24.4.1:48012.service - OpenSSH per-connection server daemon (172.24.4.1:48012). Jan 30 15:46:32.466585 systemd-logind[1444]: Removed session 26. Jan 30 15:46:32.508701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6226d7a8313e65e7f638f1e83ef36bcc1b418ab28e2e6c43cd43a5d0279ce5d7-rootfs.mount: Deactivated successfully. Jan 30 15:46:33.253630 containerd[1464]: time="2025-01-30T15:46:33.253255369Z" level=info msg="CreateContainer within sandbox \"7c71f6435aaadaee0a6eeb4210ab132ef536e3f7ec4abaafe156b211073a2d04\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 15:46:33.300637 containerd[1464]: time="2025-01-30T15:46:33.299859190Z" level=info msg="CreateContainer within sandbox \"7c71f6435aaadaee0a6eeb4210ab132ef536e3f7ec4abaafe156b211073a2d04\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5134637fbd6d9973621eed156b86aae1903489740d361518cb3d475a8e92f4ff\"" Jan 30 15:46:33.302888 containerd[1464]: time="2025-01-30T15:46:33.302140451Z" level=info msg="StartContainer for \"5134637fbd6d9973621eed156b86aae1903489740d361518cb3d475a8e92f4ff\"" Jan 30 15:46:33.360682 systemd[1]: Started cri-containerd-5134637fbd6d9973621eed156b86aae1903489740d361518cb3d475a8e92f4ff.scope - libcontainer container 5134637fbd6d9973621eed156b86aae1903489740d361518cb3d475a8e92f4ff. 
Jan 30 15:46:33.408736 systemd[1]: cri-containerd-5134637fbd6d9973621eed156b86aae1903489740d361518cb3d475a8e92f4ff.scope: Deactivated successfully.
Jan 30 15:46:33.411524 containerd[1464]: time="2025-01-30T15:46:33.411220033Z" level=info msg="StartContainer for \"5134637fbd6d9973621eed156b86aae1903489740d361518cb3d475a8e92f4ff\" returns successfully"
Jan 30 15:46:33.448943 containerd[1464]: time="2025-01-30T15:46:33.448736495Z" level=info msg="shim disconnected" id=5134637fbd6d9973621eed156b86aae1903489740d361518cb3d475a8e92f4ff namespace=k8s.io
Jan 30 15:46:33.448943 containerd[1464]: time="2025-01-30T15:46:33.448821936Z" level=warning msg="cleaning up after shim disconnected" id=5134637fbd6d9973621eed156b86aae1903489740d361518cb3d475a8e92f4ff namespace=k8s.io
Jan 30 15:46:33.448943 containerd[1464]: time="2025-01-30T15:46:33.448854808Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 15:46:33.481415 sshd[4603]: Accepted publickey for core from 172.24.4.1 port 48012 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI
Jan 30 15:46:33.483587 sshd[4603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 15:46:33.490825 systemd-logind[1444]: New session 27 of user core.
Jan 30 15:46:33.497906 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 15:46:33.510633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5134637fbd6d9973621eed156b86aae1903489740d361518cb3d475a8e92f4ff-rootfs.mount: Deactivated successfully.
Jan 30 15:46:34.263158 containerd[1464]: time="2025-01-30T15:46:34.263055495Z" level=info msg="CreateContainer within sandbox \"7c71f6435aaadaee0a6eeb4210ab132ef536e3f7ec4abaafe156b211073a2d04\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 15:46:34.336243 containerd[1464]: time="2025-01-30T15:46:34.335977340Z" level=info msg="CreateContainer within sandbox \"7c71f6435aaadaee0a6eeb4210ab132ef536e3f7ec4abaafe156b211073a2d04\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"778ab502648f7d67fc339524a9ba7faf4c3fa47cd28d50e61002e1da78df6108\""
Jan 30 15:46:34.339432 containerd[1464]: time="2025-01-30T15:46:34.338271174Z" level=info msg="StartContainer for \"778ab502648f7d67fc339524a9ba7faf4c3fa47cd28d50e61002e1da78df6108\""
Jan 30 15:46:34.407716 systemd[1]: Started cri-containerd-778ab502648f7d67fc339524a9ba7faf4c3fa47cd28d50e61002e1da78df6108.scope - libcontainer container 778ab502648f7d67fc339524a9ba7faf4c3fa47cd28d50e61002e1da78df6108.
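By this point both short-lived steps (mount-bpf-fs and clean-cilium-state) have exited and the long-running cilium-agent container has been started in the same sandbox. The same progression is visible from the Kubernetes API in the pod's init-container statuses. A small client-go sketch, assuming KUBECONFIG points at credentials with read access to kube-system and using the pod name cilium-r8fr6 reported further down in this log:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: KUBECONFIG is set; "cilium-r8fr6" is the pod name the
	// kubelet reports later in this log.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.Background(), "cilium-r8fr6", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Init steps such as mount-bpf-fs and clean-cilium-state should show a
	// terminated state once their shims have disconnected.
	for _, s := range pod.Status.InitContainerStatuses {
		fmt.Printf("%s: terminated=%v\n", s.Name, s.State.Terminated != nil)
	}
}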
Jan 30 15:46:34.445684 containerd[1464]: time="2025-01-30T15:46:34.444837798Z" level=info msg="StartContainer for \"778ab502648f7d67fc339524a9ba7faf4c3fa47cd28d50e61002e1da78df6108\" returns successfully"
Jan 30 15:46:34.812590 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 15:46:34.869586 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jan 30 15:46:35.292920 kubelet[2633]: I0130 15:46:35.291319 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r8fr6" podStartSLOduration=5.291281826 podStartE2EDuration="5.291281826s" podCreationTimestamp="2025-01-30 15:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:46:35.290325233 +0000 UTC m=+183.936333018" watchObservedRunningTime="2025-01-30 15:46:35.291281826 +0000 UTC m=+183.937289611"
Jan 30 15:46:35.617724 kubelet[2633]: I0130 15:46:35.617624 2633 setters.go:580] "Node became not ready" node="ci-4081-3-0-c-719cef3df4.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T15:46:35Z","lastTransitionTime":"2025-01-30T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 30 15:46:36.159204 systemd[1]: run-containerd-runc-k8s.io-778ab502648f7d67fc339524a9ba7faf4c3fa47cd28d50e61002e1da78df6108-runc.B0WXkg.mount: Deactivated successfully.
Jan 30 15:46:38.011750 systemd-networkd[1374]: lxc_health: Link UP
Jan 30 15:46:38.012944 systemd-networkd[1374]: lxc_health: Gained carrier
Jan 30 15:46:39.070233 systemd-networkd[1374]: lxc_health: Gained IPv6LL
Jan 30 15:46:45.302454 sshd[4603]: pam_unix(sshd:session): session closed for user core
Jan 30 15:46:45.310814 systemd[1]: sshd@24-172.24.4.139:22-172.24.4.1:48012.service: Deactivated successfully.
Jan 30 15:46:45.314493 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 15:46:45.318843 systemd-logind[1444]: Session 27 logged out. Waiting for processes to exit.
Jan 30 15:46:45.321917 systemd-logind[1444]: Removed session 27.
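The node is still marked NotReady at 15:46:35 because the CNI plugin has not finished initializing; once cilium-agent is running and the lxc_health interface gains carrier, the Ready condition would be expected to flip back to True on a subsequent kubelet sync. A client-go sketch for checking that condition, again assuming KUBECONFIG access and using the node name taken from the "Node became not ready" entry above:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: KUBECONFIG is set; the node name is copied from the log.
	const nodeName = "ci-4081-3-0-c-719cef3df4.novalocal"

	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Print the Ready condition; status "False" with reason "KubeletNotReady"
	// matches the log entry, and it should read "True" once the CNI is up.
	for _, cond := range node.Status.Conditions {
		if cond.Type == "Ready" {
			fmt.Printf("Ready=%s reason=%s\n", cond.Status, cond.Reason)
		}
	}
}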