Jan 30 15:49:13.049076 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 15:49:13.049104 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:49:13.049115 kernel: BIOS-provided physical RAM map: Jan 30 15:49:13.049124 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 15:49:13.049131 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 15:49:13.049141 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 15:49:13.049150 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 30 15:49:13.049158 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 30 15:49:13.049166 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 15:49:13.049174 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 15:49:13.049182 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 30 15:49:13.049190 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 30 15:49:13.049198 kernel: NX (Execute Disable) protection: active Jan 30 15:49:13.049206 kernel: APIC: Static calls initialized Jan 30 15:49:13.049217 kernel: SMBIOS 3.0.0 present. Jan 30 15:49:13.049225 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 30 15:49:13.049234 kernel: Hypervisor detected: KVM Jan 30 15:49:13.049242 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 15:49:13.049250 kernel: kvm-clock: using sched offset of 3441317657 cycles Jan 30 15:49:13.049260 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 15:49:13.049269 kernel: tsc: Detected 1996.249 MHz processor Jan 30 15:49:13.049278 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 15:49:13.049287 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 15:49:13.049295 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 30 15:49:13.049304 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 15:49:13.049313 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 15:49:13.049321 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 30 15:49:13.049329 kernel: ACPI: Early table checksum verification disabled Jan 30 15:49:13.049339 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 30 15:49:13.049348 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:49:13.049356 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:49:13.049365 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:49:13.049373 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 30 15:49:13.049382 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:49:13.049390 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 30 15:49:13.049399 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 30 15:49:13.049407 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 30 15:49:13.049417 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 30 15:49:13.049426 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 30 15:49:13.049434 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 30 15:49:13.049446 kernel: No NUMA configuration found Jan 30 15:49:13.049455 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 30 15:49:13.049464 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 30 15:49:13.049474 kernel: Zone ranges: Jan 30 15:49:13.049483 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 15:49:13.049492 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 15:49:13.049501 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 30 15:49:13.049509 kernel: Movable zone start for each node Jan 30 15:49:13.049518 kernel: Early memory node ranges Jan 30 15:49:13.049527 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 15:49:13.049535 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 30 15:49:13.049545 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 30 15:49:13.049554 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 30 15:49:13.049563 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 15:49:13.049572 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 15:49:13.049581 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 30 15:49:13.049589 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 15:49:13.049598 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 15:49:13.049607 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 15:49:13.049616 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 15:49:13.049626 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 15:49:13.049635 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 15:49:13.049644 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 15:49:13.049652 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 15:49:13.049661 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 15:49:13.049670 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 15:49:13.049679 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 15:49:13.049687 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 30 15:49:13.049696 kernel: Booting paravirtualized kernel on KVM Jan 30 15:49:13.049707 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 15:49:13.049716 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 15:49:13.049724 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 15:49:13.049733 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 15:49:13.049742 kernel: pcpu-alloc: [0] 0 1 Jan 30 15:49:13.049750 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 15:49:13.049760 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:49:13.049770 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 15:49:13.049780 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 15:49:13.049789 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 15:49:13.049798 kernel: Fallback order for Node 0: 0 Jan 30 15:49:13.049807 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 30 15:49:13.049816 kernel: Policy zone: Normal Jan 30 15:49:13.049852 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 15:49:13.049861 kernel: software IO TLB: area num 2. Jan 30 15:49:13.049870 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 227308K reserved, 0K cma-reserved) Jan 30 15:49:13.049895 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 15:49:13.049907 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 15:49:13.049915 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 15:49:13.049924 kernel: Dynamic Preempt: voluntary Jan 30 15:49:13.049933 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 15:49:13.049943 kernel: rcu: RCU event tracing is enabled. Jan 30 15:49:13.049954 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 15:49:13.049962 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 15:49:13.049970 kernel: Rude variant of Tasks RCU enabled. Jan 30 15:49:13.049979 kernel: Tracing variant of Tasks RCU enabled. Jan 30 15:49:13.049987 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 15:49:13.049997 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 15:49:13.050005 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 15:49:13.050013 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 15:49:13.050022 kernel: Console: colour VGA+ 80x25 Jan 30 15:49:13.050030 kernel: printk: console [tty0] enabled Jan 30 15:49:13.050038 kernel: printk: console [ttyS0] enabled Jan 30 15:49:13.050047 kernel: ACPI: Core revision 20230628 Jan 30 15:49:13.050055 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 15:49:13.050063 kernel: x2apic enabled Jan 30 15:49:13.050073 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 15:49:13.050081 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 15:49:13.050089 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 30 15:49:13.050097 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 30 15:49:13.050106 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 30 15:49:13.050114 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 30 15:49:13.050122 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 15:49:13.050131 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 15:49:13.050156 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 15:49:13.050170 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 15:49:13.050179 kernel: Speculative Store Bypass: Vulnerable Jan 30 15:49:13.050187 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 30 15:49:13.050195 kernel: Freeing SMP alternatives memory: 32K Jan 30 15:49:13.050209 kernel: pid_max: default: 32768 minimum: 301 Jan 30 15:49:13.050220 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 15:49:13.050230 kernel: landlock: Up and running. Jan 30 15:49:13.050239 kernel: SELinux: Initializing. Jan 30 15:49:13.050248 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 15:49:13.050258 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 15:49:13.050267 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 30 15:49:13.050279 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 15:49:13.050288 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 15:49:13.050298 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 15:49:13.050307 kernel: Performance Events: AMD PMU driver. Jan 30 15:49:13.050316 kernel: ... version: 0 Jan 30 15:49:13.050327 kernel: ... bit width: 48 Jan 30 15:49:13.050336 kernel: ... generic registers: 4 Jan 30 15:49:13.050346 kernel: ... value mask: 0000ffffffffffff Jan 30 15:49:13.050355 kernel: ... max period: 00007fffffffffff Jan 30 15:49:13.050364 kernel: ... fixed-purpose events: 0 Jan 30 15:49:13.050374 kernel: ... event mask: 000000000000000f Jan 30 15:49:13.050383 kernel: signal: max sigframe size: 1440 Jan 30 15:49:13.050392 kernel: rcu: Hierarchical SRCU implementation. Jan 30 15:49:13.050401 kernel: rcu: Max phase no-delay instances is 400. Jan 30 15:49:13.050412 kernel: smp: Bringing up secondary CPUs ... Jan 30 15:49:13.050422 kernel: smpboot: x86: Booting SMP configuration: Jan 30 15:49:13.050431 kernel: .... 
node #0, CPUs: #1 Jan 30 15:49:13.050440 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 15:49:13.050449 kernel: smpboot: Max logical packages: 2 Jan 30 15:49:13.050459 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 30 15:49:13.050468 kernel: devtmpfs: initialized Jan 30 15:49:13.050477 kernel: x86/mm: Memory block size: 128MB Jan 30 15:49:13.050487 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 15:49:13.050496 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 15:49:13.050507 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 15:49:13.050517 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 15:49:13.050526 kernel: audit: initializing netlink subsys (disabled) Jan 30 15:49:13.050535 kernel: audit: type=2000 audit(1738252152.306:1): state=initialized audit_enabled=0 res=1 Jan 30 15:49:13.050544 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 15:49:13.050554 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 15:49:13.050563 kernel: cpuidle: using governor menu Jan 30 15:49:13.050572 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 15:49:13.050582 kernel: dca service started, version 1.12.1 Jan 30 15:49:13.050592 kernel: PCI: Using configuration type 1 for base access Jan 30 15:49:13.050602 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 30 15:49:13.050611 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 15:49:13.050621 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 15:49:13.050630 kernel: ACPI: Added _OSI(Module Device) Jan 30 15:49:13.050639 kernel: ACPI: Added _OSI(Processor Device) Jan 30 15:49:13.050649 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 15:49:13.050658 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 15:49:13.050667 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 15:49:13.050678 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 15:49:13.050688 kernel: ACPI: Interpreter enabled Jan 30 15:49:13.050697 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 15:49:13.050706 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 15:49:13.050715 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 15:49:13.050725 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 15:49:13.050734 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 30 15:49:13.050743 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 15:49:13.050946 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 15:49:13.051067 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 15:49:13.051165 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 15:49:13.051180 kernel: acpiphp: Slot [3] registered Jan 30 15:49:13.051190 kernel: acpiphp: Slot [4] registered Jan 30 15:49:13.051199 kernel: acpiphp: Slot [5] registered Jan 30 15:49:13.051208 kernel: acpiphp: Slot [6] registered Jan 30 15:49:13.051217 kernel: acpiphp: Slot [7] registered Jan 30 15:49:13.051230 kernel: acpiphp: Slot [8] registered Jan 30 15:49:13.051240 kernel: acpiphp: Slot [9] registered Jan 30 15:49:13.051249 kernel: acpiphp: Slot [10] registered Jan 30 15:49:13.051258 
kernel: acpiphp: Slot [11] registered Jan 30 15:49:13.051267 kernel: acpiphp: Slot [12] registered Jan 30 15:49:13.051276 kernel: acpiphp: Slot [13] registered Jan 30 15:49:13.051286 kernel: acpiphp: Slot [14] registered Jan 30 15:49:13.051295 kernel: acpiphp: Slot [15] registered Jan 30 15:49:13.051304 kernel: acpiphp: Slot [16] registered Jan 30 15:49:13.051315 kernel: acpiphp: Slot [17] registered Jan 30 15:49:13.051324 kernel: acpiphp: Slot [18] registered Jan 30 15:49:13.051333 kernel: acpiphp: Slot [19] registered Jan 30 15:49:13.051343 kernel: acpiphp: Slot [20] registered Jan 30 15:49:13.051352 kernel: acpiphp: Slot [21] registered Jan 30 15:49:13.051361 kernel: acpiphp: Slot [22] registered Jan 30 15:49:13.051371 kernel: acpiphp: Slot [23] registered Jan 30 15:49:13.051380 kernel: acpiphp: Slot [24] registered Jan 30 15:49:13.051389 kernel: acpiphp: Slot [25] registered Jan 30 15:49:13.051398 kernel: acpiphp: Slot [26] registered Jan 30 15:49:13.051410 kernel: acpiphp: Slot [27] registered Jan 30 15:49:13.051419 kernel: acpiphp: Slot [28] registered Jan 30 15:49:13.051428 kernel: acpiphp: Slot [29] registered Jan 30 15:49:13.051437 kernel: acpiphp: Slot [30] registered Jan 30 15:49:13.051446 kernel: acpiphp: Slot [31] registered Jan 30 15:49:13.051455 kernel: PCI host bridge to bus 0000:00 Jan 30 15:49:13.051553 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 15:49:13.051643 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 15:49:13.051735 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 15:49:13.051840 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 15:49:13.051935 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 30 15:49:13.052021 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 15:49:13.052125 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 15:49:13.052224 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 30 15:49:13.052322 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 30 15:49:13.052421 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 30 15:49:13.052512 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 30 15:49:13.052608 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 30 15:49:13.052699 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 30 15:49:13.052790 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 30 15:49:13.053016 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 15:49:13.053119 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 30 15:49:13.053218 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 30 15:49:13.053324 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 30 15:49:13.053426 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 30 15:49:13.053528 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 30 15:49:13.053626 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 30 15:49:13.053723 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 30 15:49:13.053849 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 15:49:13.053960 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 30 15:49:13.054060 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 30 15:49:13.054179 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 30 15:49:13.054281 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 30 15:49:13.054377 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 30 15:49:13.054481 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 30 15:49:13.054587 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 15:49:13.054684 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 30 15:49:13.054784 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 30 15:49:13.054939 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 30 15:49:13.055041 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 30 15:49:13.055138 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 30 15:49:13.055241 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 15:49:13.055346 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 30 15:49:13.055444 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 30 15:49:13.055543 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 30 15:49:13.055557 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 15:49:13.055567 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 15:49:13.055576 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 15:49:13.055586 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 15:49:13.055595 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 15:49:13.055608 kernel: iommu: Default domain type: Translated Jan 30 15:49:13.055618 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 15:49:13.055627 kernel: PCI: Using ACPI for IRQ routing Jan 30 15:49:13.055637 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 15:49:13.055646 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 15:49:13.055655 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 30 15:49:13.055753 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 30 15:49:13.055906 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 30 15:49:13.056008 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 15:49:13.056022 kernel: vgaarb: loaded Jan 30 15:49:13.056031 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 15:49:13.056040 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 15:49:13.056049 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 15:49:13.056058 kernel: pnp: PnP ACPI init Jan 30 15:49:13.056146 kernel: pnp 00:03: [dma 2] Jan 30 15:49:13.056160 kernel: pnp: PnP ACPI: found 5 devices Jan 30 15:49:13.056170 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 15:49:13.056182 kernel: NET: Registered PF_INET protocol family Jan 30 15:49:13.056191 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 15:49:13.056200 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 15:49:13.056209 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 15:49:13.056218 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 15:49:13.056227 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 30 15:49:13.056235 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 15:49:13.056244 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 15:49:13.056253 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 15:49:13.056264 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 15:49:13.056273 kernel: NET: Registered PF_XDP protocol family Jan 30 15:49:13.056353 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 15:49:13.056431 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 15:49:13.056508 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 15:49:13.056588 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 30 15:49:13.056667 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 30 15:49:13.056759 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 30 15:49:13.057911 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 15:49:13.057932 kernel: PCI: CLS 0 bytes, default 64 Jan 30 15:49:13.057942 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 15:49:13.057952 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 30 15:49:13.057964 kernel: Initialise system trusted keyrings Jan 30 15:49:13.057973 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 15:49:13.057981 kernel: Key type asymmetric registered Jan 30 15:49:13.057990 kernel: Asymmetric key parser 'x509' registered Jan 30 15:49:13.058003 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 15:49:13.058012 kernel: io scheduler mq-deadline registered Jan 30 15:49:13.058021 kernel: io scheduler kyber registered Jan 30 15:49:13.058029 kernel: io scheduler bfq registered Jan 30 15:49:13.058038 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 15:49:13.058048 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 30 15:49:13.058057 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 15:49:13.058066 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 30 15:49:13.058075 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 15:49:13.058086 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 15:49:13.058095 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 15:49:13.058104 kernel: random: crng init done Jan 30 15:49:13.058112 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 15:49:13.058121 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 15:49:13.058130 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 15:49:13.058253 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 30 15:49:13.058270 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 15:49:13.058353 kernel: rtc_cmos 00:04: registered as rtc0 Jan 30 15:49:13.058445 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T15:49:12 UTC (1738252152) Jan 30 15:49:13.058531 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 30 15:49:13.058545 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 15:49:13.058555 kernel: NET: Registered PF_INET6 protocol family Jan 30 15:49:13.058564 kernel: Segment Routing with IPv6 Jan 30 15:49:13.058574 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 15:49:13.058584 kernel: NET: Registered PF_PACKET 
protocol family Jan 30 15:49:13.058593 kernel: Key type dns_resolver registered Jan 30 15:49:13.058606 kernel: IPI shorthand broadcast: enabled Jan 30 15:49:13.058615 kernel: sched_clock: Marking stable (951007028, 179118614)->(1158123190, -27997548) Jan 30 15:49:13.058625 kernel: registered taskstats version 1 Jan 30 15:49:13.058634 kernel: Loading compiled-in X.509 certificates Jan 30 15:49:13.058644 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 15:49:13.058653 kernel: Key type .fscrypt registered Jan 30 15:49:13.058662 kernel: Key type fscrypt-provisioning registered Jan 30 15:49:13.058672 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 15:49:13.058682 kernel: ima: Allocated hash algorithm: sha1 Jan 30 15:49:13.058693 kernel: ima: No architecture policies found Jan 30 15:49:13.058702 kernel: clk: Disabling unused clocks Jan 30 15:49:13.058711 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 15:49:13.058721 kernel: Write protecting the kernel read-only data: 36864k Jan 30 15:49:13.058731 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 15:49:13.058740 kernel: Run /init as init process Jan 30 15:49:13.058749 kernel: with arguments: Jan 30 15:49:13.058758 kernel: /init Jan 30 15:49:13.058767 kernel: with environment: Jan 30 15:49:13.058778 kernel: HOME=/ Jan 30 15:49:13.058787 kernel: TERM=linux Jan 30 15:49:13.058796 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 15:49:13.058810 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 15:49:13.058846 systemd[1]: Detected virtualization kvm. Jan 30 15:49:13.058857 systemd[1]: Detected architecture x86-64. Jan 30 15:49:13.058867 systemd[1]: Running in initrd. Jan 30 15:49:13.058880 systemd[1]: No hostname configured, using default hostname. Jan 30 15:49:13.058889 systemd[1]: Hostname set to . Jan 30 15:49:13.058900 systemd[1]: Initializing machine ID from VM UUID. Jan 30 15:49:13.058910 systemd[1]: Queued start job for default target initrd.target. Jan 30 15:49:13.058920 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:49:13.058930 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:49:13.058942 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 15:49:13.058960 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 15:49:13.058973 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 15:49:13.058984 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 15:49:13.058996 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 15:49:13.059007 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 15:49:13.059018 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 30 15:49:13.059031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:49:13.059042 systemd[1]: Reached target paths.target - Path Units. Jan 30 15:49:13.059052 systemd[1]: Reached target slices.target - Slice Units. Jan 30 15:49:13.059063 systemd[1]: Reached target swap.target - Swaps. Jan 30 15:49:13.059073 systemd[1]: Reached target timers.target - Timer Units. Jan 30 15:49:13.059084 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 15:49:13.059094 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 15:49:13.059105 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 15:49:13.059117 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 15:49:13.059128 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:49:13.059138 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 15:49:13.059149 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:49:13.059159 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 15:49:13.059170 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 15:49:13.059180 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 15:49:13.059191 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 15:49:13.059201 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 15:49:13.059214 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 15:49:13.059225 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 15:49:13.059257 systemd-journald[183]: Collecting audit messages is disabled. Jan 30 15:49:13.059281 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:49:13.059295 systemd-journald[183]: Journal started Jan 30 15:49:13.059319 systemd-journald[183]: Runtime Journal (/run/log/journal/d8c3bd0a05604ec0bb678ebfaa69f4ea) is 8.0M, max 78.3M, 70.3M free. Jan 30 15:49:13.077022 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 15:49:13.079246 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 15:49:13.079582 systemd-modules-load[185]: Inserted module 'overlay' Jan 30 15:49:13.083087 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:49:13.085898 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 15:49:13.109029 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 15:49:13.153666 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 15:49:13.153691 kernel: Bridge firewalling registered Jan 30 15:49:13.122594 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 30 15:49:13.158990 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 15:49:13.159783 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 15:49:13.162435 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:49:13.164059 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 30 15:49:13.172002 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:49:13.175061 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:49:13.178295 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 15:49:13.180122 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:49:13.193984 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:49:13.194880 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:49:13.195640 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:49:13.200958 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 15:49:13.209439 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 15:49:13.212863 dracut-cmdline[218]: dracut-dracut-053 Jan 30 15:49:13.213770 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 15:49:13.250082 systemd-resolved[219]: Positive Trust Anchors: Jan 30 15:49:13.250788 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 15:49:13.251727 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 15:49:13.257014 systemd-resolved[219]: Defaulting to hostname 'linux'. Jan 30 15:49:13.257882 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 15:49:13.258437 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:49:13.299922 kernel: SCSI subsystem initialized Jan 30 15:49:13.310886 kernel: Loading iSCSI transport class v2.0-870. Jan 30 15:49:13.322879 kernel: iscsi: registered transport (tcp) Jan 30 15:49:13.346133 kernel: iscsi: registered transport (qla4xxx) Jan 30 15:49:13.346221 kernel: QLogic iSCSI HBA Driver Jan 30 15:49:13.407617 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 15:49:13.413153 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 15:49:13.451355 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 30 15:49:13.451406 kernel: device-mapper: uevent: version 1.0.3 Jan 30 15:49:13.453434 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 15:49:13.513948 kernel: raid6: sse2x4 gen() 3626 MB/s Jan 30 15:49:13.531927 kernel: raid6: sse2x2 gen() 10340 MB/s Jan 30 15:49:13.550233 kernel: raid6: sse2x1 gen() 10185 MB/s Jan 30 15:49:13.550294 kernel: raid6: using algorithm sse2x2 gen() 10340 MB/s Jan 30 15:49:13.569254 kernel: raid6: .... xor() 9418 MB/s, rmw enabled Jan 30 15:49:13.569325 kernel: raid6: using ssse3x2 recovery algorithm Jan 30 15:49:13.592207 kernel: xor: measuring software checksum speed Jan 30 15:49:13.592280 kernel: prefetch64-sse : 18519 MB/sec Jan 30 15:49:13.594166 kernel: generic_sse : 16826 MB/sec Jan 30 15:49:13.594212 kernel: xor: using function: prefetch64-sse (18519 MB/sec) Jan 30 15:49:13.789263 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 15:49:13.806198 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 15:49:13.815009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:49:13.847010 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jan 30 15:49:13.857871 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:49:13.872792 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 15:49:13.906443 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Jan 30 15:49:13.958782 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 15:49:13.968222 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 15:49:14.054254 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:49:14.065093 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 15:49:14.100339 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 15:49:14.104964 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 15:49:14.107802 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:49:14.109140 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 15:49:14.115981 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 15:49:14.141221 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 15:49:14.159868 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 30 15:49:14.203398 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 30 15:49:14.203519 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 15:49:14.203533 kernel: GPT:17805311 != 20971519 Jan 30 15:49:14.203545 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 15:49:14.203556 kernel: GPT:17805311 != 20971519 Jan 30 15:49:14.203567 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 15:49:14.203578 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:49:14.203588 kernel: libata version 3.00 loaded. Jan 30 15:49:14.188332 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 15:49:14.188463 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 15:49:14.208168 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 30 15:49:14.218197 kernel: scsi host0: ata_piix Jan 30 15:49:14.218345 kernel: scsi host1: ata_piix Jan 30 15:49:14.218478 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 30 15:49:14.218493 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 30 15:49:14.189140 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:49:14.283687 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (462) Jan 30 15:49:14.283707 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (465) Jan 30 15:49:14.189646 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:49:14.189767 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:49:14.191184 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:49:14.199115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:49:14.244780 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 15:49:14.284790 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 15:49:14.286897 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:49:14.313927 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 15:49:14.324566 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 15:49:14.334968 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 15:49:14.342179 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 15:49:14.349113 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 15:49:14.357284 disk-uuid[503]: Primary Header is updated. Jan 30 15:49:14.357284 disk-uuid[503]: Secondary Entries is updated. Jan 30 15:49:14.357284 disk-uuid[503]: Secondary Header is updated. Jan 30 15:49:14.366890 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:49:14.398202 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:49:15.388985 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 15:49:15.389783 disk-uuid[505]: The operation has completed successfully. Jan 30 15:49:15.459761 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 15:49:15.460085 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 15:49:15.490957 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 15:49:15.509277 sh[528]: Success Jan 30 15:49:15.539882 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 30 15:49:15.621636 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 15:49:15.637072 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 15:49:15.640415 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 15:49:15.692889 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 15:49:15.692985 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:49:15.693017 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 15:49:15.698947 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 15:49:15.701900 kernel: BTRFS info (device dm-0): using free space tree Jan 30 15:49:15.726767 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 15:49:15.728515 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 15:49:15.735212 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 15:49:15.739132 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 15:49:15.772929 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:49:15.777428 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:49:15.777489 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:49:15.786890 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:49:15.804286 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 15:49:15.810984 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:49:15.830323 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 15:49:15.835978 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 15:49:15.867884 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 15:49:15.877066 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 15:49:15.897167 systemd-networkd[710]: lo: Link UP Jan 30 15:49:15.897181 systemd-networkd[710]: lo: Gained carrier Jan 30 15:49:15.898381 systemd-networkd[710]: Enumeration completed Jan 30 15:49:15.899039 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:49:15.899042 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:49:15.899404 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 15:49:15.900591 systemd-networkd[710]: eth0: Link UP Jan 30 15:49:15.900595 systemd-networkd[710]: eth0: Gained carrier Jan 30 15:49:15.900602 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:49:15.900975 systemd[1]: Reached target network.target - Network. Jan 30 15:49:15.914391 systemd-networkd[710]: eth0: DHCPv4 address 172.24.4.96/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 30 15:49:15.988235 ignition[669]: Ignition 2.19.0 Jan 30 15:49:15.988247 ignition[669]: Stage: fetch-offline Jan 30 15:49:15.988290 ignition[669]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:49:15.988301 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:49:15.988408 ignition[669]: parsed url from cmdline: "" Jan 30 15:49:15.991233 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 15:49:15.988412 ignition[669]: no config URL provided Jan 30 15:49:15.988417 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 15:49:15.988426 ignition[669]: no config at "/usr/lib/ignition/user.ign" Jan 30 15:49:15.988431 ignition[669]: failed to fetch config: resource requires networking Jan 30 15:49:15.989442 ignition[669]: Ignition finished successfully Jan 30 15:49:16.000033 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 15:49:16.014426 ignition[720]: Ignition 2.19.0 Jan 30 15:49:16.014439 ignition[720]: Stage: fetch Jan 30 15:49:16.014618 ignition[720]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:49:16.014630 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:49:16.014717 ignition[720]: parsed url from cmdline: "" Jan 30 15:49:16.014720 ignition[720]: no config URL provided Jan 30 15:49:16.014725 ignition[720]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 15:49:16.014733 ignition[720]: no config at "/usr/lib/ignition/user.ign" Jan 30 15:49:16.014873 ignition[720]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 30 15:49:16.015023 ignition[720]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 30 15:49:16.015057 ignition[720]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 30 15:49:16.200771 ignition[720]: GET result: OK Jan 30 15:49:16.200995 ignition[720]: parsing config with SHA512: c0d44757fe9935834959270f1e9da84de3916d32e976795cef5281e9a304a9715fca5fbb500a6b06869ea9a1e21d2f9253d71389deea8974c1bf15be804ad008 Jan 30 15:49:16.211421 unknown[720]: fetched base config from "system" Jan 30 15:49:16.213641 ignition[720]: fetch: fetch complete Jan 30 15:49:16.211466 unknown[720]: fetched base config from "system" Jan 30 15:49:16.213655 ignition[720]: fetch: fetch passed Jan 30 15:49:16.211482 unknown[720]: fetched user config from "openstack" Jan 30 15:49:16.213755 ignition[720]: Ignition finished successfully Jan 30 15:49:16.218064 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 15:49:16.234222 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 15:49:16.264402 ignition[726]: Ignition 2.19.0 Jan 30 15:49:16.264432 ignition[726]: Stage: kargs Jan 30 15:49:16.264893 ignition[726]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:49:16.264923 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:49:16.267441 ignition[726]: kargs: kargs passed Jan 30 15:49:16.269895 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 15:49:16.267545 ignition[726]: Ignition finished successfully Jan 30 15:49:16.285697 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 15:49:16.315722 ignition[732]: Ignition 2.19.0 Jan 30 15:49:16.315746 ignition[732]: Stage: disks Jan 30 15:49:16.316197 ignition[732]: no configs at "/usr/lib/ignition/base.d" Jan 30 15:49:16.316223 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:49:16.319298 ignition[732]: disks: disks passed Jan 30 15:49:16.321872 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 15:49:16.319400 ignition[732]: Ignition finished successfully Jan 30 15:49:16.324311 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 15:49:16.326400 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jan 30 15:49:16.329131 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 15:49:16.331645 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 15:49:16.334640 systemd[1]: Reached target basic.target - Basic System. Jan 30 15:49:16.347217 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 15:49:16.382187 systemd-fsck[740]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 15:49:16.398757 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 15:49:16.406072 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 15:49:16.570865 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 15:49:16.571923 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 15:49:16.573042 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 15:49:16.580067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 15:49:16.584078 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 15:49:16.587267 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 15:49:16.590519 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 30 15:49:16.609278 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (748) Jan 30 15:49:16.609332 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:49:16.609375 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:49:16.609405 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:49:16.609432 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:49:16.593942 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 15:49:16.593970 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 15:49:16.611172 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 15:49:16.615793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 15:49:16.626972 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 15:49:16.809918 initrd-setup-root[776]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 15:49:16.824898 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory Jan 30 15:49:16.833719 initrd-setup-root[790]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 15:49:16.842500 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 15:49:17.014247 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 15:49:17.025022 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 15:49:17.030123 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 15:49:17.050376 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 15:49:17.054427 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:49:17.105902 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 15:49:17.110737 ignition[864]: INFO : Ignition 2.19.0 Jan 30 15:49:17.110737 ignition[864]: INFO : Stage: mount Jan 30 15:49:17.112940 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:49:17.112940 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:49:17.112940 ignition[864]: INFO : mount: mount passed Jan 30 15:49:17.112940 ignition[864]: INFO : Ignition finished successfully Jan 30 15:49:17.113228 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 15:49:17.945496 systemd-networkd[710]: eth0: Gained IPv6LL Jan 30 15:49:23.899605 coreos-metadata[750]: Jan 30 15:49:23.899 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:49:23.940271 coreos-metadata[750]: Jan 30 15:49:23.940 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 15:49:23.955953 coreos-metadata[750]: Jan 30 15:49:23.955 INFO Fetch successful Jan 30 15:49:23.957577 coreos-metadata[750]: Jan 30 15:49:23.956 INFO wrote hostname ci-4081-3-0-c-6e27ecb2ae.novalocal to /sysroot/etc/hostname Jan 30 15:49:23.959416 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 30 15:49:23.959630 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 30 15:49:23.971157 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 15:49:24.000196 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 15:49:24.017963 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (883) Jan 30 15:49:24.027141 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 15:49:24.027225 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 15:49:24.031351 kernel: BTRFS info (device vda6): using free space tree Jan 30 15:49:24.041865 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 15:49:24.046819 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 15:49:24.078770 ignition[901]: INFO : Ignition 2.19.0 Jan 30 15:49:24.078770 ignition[901]: INFO : Stage: files Jan 30 15:49:24.080075 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:49:24.080075 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:49:24.082010 ignition[901]: DEBUG : files: compiled without relabeling support, skipping Jan 30 15:49:24.084065 ignition[901]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 15:49:24.084065 ignition[901]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 15:49:24.090981 ignition[901]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 15:49:24.091855 ignition[901]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 15:49:24.092576 ignition[901]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 15:49:24.092462 unknown[901]: wrote ssh authorized keys file for user: core Jan 30 15:49:24.095980 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 15:49:24.096932 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 15:49:24.096932 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 15:49:24.096932 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 15:49:24.168804 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 15:49:24.460087 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 15:49:24.460087 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 15:49:24.460087 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 15:49:25.012985 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 30 15:49:25.437342 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 15:49:25.437342 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 30 15:49:25.443028 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 15:49:25.443028 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 15:49:25.443028 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 15:49:25.443028 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 15:49:25.443028 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 15:49:25.443028 
ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 15:49:25.443028 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 15:49:25.443028 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 15:49:25.443028 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 15:49:25.443028 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:49:25.443028 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:49:25.443028 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:49:25.443028 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 15:49:25.945616 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 30 15:49:27.533880 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 15:49:27.533880 ignition[901]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 30 15:49:27.539086 ignition[901]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 15:49:27.539086 ignition[901]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 15:49:27.539086 ignition[901]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 30 15:49:27.539086 ignition[901]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 30 15:49:27.539086 ignition[901]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 15:49:27.539086 ignition[901]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 15:49:27.539086 ignition[901]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 30 15:49:27.539086 ignition[901]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 30 15:49:27.539086 ignition[901]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 15:49:27.539086 ignition[901]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 15:49:27.539086 ignition[901]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 15:49:27.539086 ignition[901]: INFO : files: files passed Jan 30 15:49:27.539086 ignition[901]: INFO : 
Ignition finished successfully Jan 30 15:49:27.539020 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 15:49:27.551147 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 15:49:27.560038 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 15:49:27.565545 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 15:49:27.566476 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 15:49:27.582391 initrd-setup-root-after-ignition[934]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:49:27.584883 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:49:27.584883 initrd-setup-root-after-ignition[930]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 15:49:27.586630 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 15:49:27.588651 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 15:49:27.595064 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 15:49:27.619552 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 15:49:27.619693 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 15:49:27.620534 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 15:49:27.621944 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 15:49:27.624037 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 15:49:27.630972 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 15:49:27.646321 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 15:49:27.654181 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 15:49:27.665621 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 15:49:27.665755 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 15:49:27.668975 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:49:27.670045 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:49:27.672334 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 15:49:27.674506 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 15:49:27.674558 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 15:49:27.677178 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 15:49:27.678268 systemd[1]: Stopped target basic.target - Basic System. Jan 30 15:49:27.680475 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 15:49:27.682348 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 15:49:27.684217 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 15:49:27.686400 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 15:49:27.688633 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 15:49:27.690897 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jan 30 15:49:27.693003 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 15:49:27.695205 systemd[1]: Stopped target swap.target - Swaps. Jan 30 15:49:27.697253 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 15:49:27.697303 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 15:49:27.699773 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:49:27.700878 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:49:27.701395 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 15:49:27.701988 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:49:27.703360 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 15:49:27.703407 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 15:49:27.706718 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 15:49:27.706764 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 15:49:27.707896 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 15:49:27.707936 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 15:49:27.719929 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 15:49:27.722918 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 15:49:27.725447 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 15:49:27.725502 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:49:27.727464 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 15:49:27.727508 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 15:49:27.733330 ignition[955]: INFO : Ignition 2.19.0 Jan 30 15:49:27.733330 ignition[955]: INFO : Stage: umount Jan 30 15:49:27.738606 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 15:49:27.738606 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 15:49:27.738606 ignition[955]: INFO : umount: umount passed Jan 30 15:49:27.738606 ignition[955]: INFO : Ignition finished successfully Jan 30 15:49:27.736291 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 15:49:27.736388 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 15:49:27.738049 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 15:49:27.738090 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 15:49:27.739953 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 15:49:27.739993 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 15:49:27.740556 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 15:49:27.740595 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 15:49:27.742006 systemd[1]: Stopped target network.target - Network. Jan 30 15:49:27.743008 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 15:49:27.743079 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 15:49:27.743611 systemd[1]: Stopped target paths.target - Path Units. Jan 30 15:49:27.744075 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 30 15:49:27.746062 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:49:27.748139 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 15:49:27.749157 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 15:49:27.750214 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 15:49:27.750254 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 15:49:27.753162 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 15:49:27.753197 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 15:49:27.754190 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 15:49:27.754233 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 15:49:27.755289 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 15:49:27.755331 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 15:49:27.758684 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 15:49:27.760226 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 15:49:27.764242 systemd-networkd[710]: eth0: DHCPv6 lease lost Jan 30 15:49:27.764679 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 15:49:27.765259 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 15:49:27.765352 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 15:49:27.767437 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 15:49:27.767537 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 15:49:27.769950 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 15:49:27.770059 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 15:49:27.772214 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 15:49:27.772620 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:49:27.773503 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 15:49:27.773549 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 15:49:27.784964 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 15:49:27.785800 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 15:49:27.785874 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 15:49:27.786441 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 15:49:27.786483 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:49:27.787052 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 15:49:27.787092 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 15:49:27.788090 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 15:49:27.788130 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:49:27.789380 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:49:27.798310 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 15:49:27.798430 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 15:49:27.799779 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 30 15:49:27.799938 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:49:27.801196 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 15:49:27.801251 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 15:49:27.802433 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 15:49:27.802465 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:49:27.803575 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 15:49:27.803616 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 15:49:27.805270 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 15:49:27.805311 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 15:49:27.806466 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 15:49:27.806508 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 15:49:27.815225 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 15:49:27.815798 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 15:49:27.815871 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:49:27.816431 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 15:49:27.816473 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:49:27.817082 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 15:49:27.817122 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:49:27.818305 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:49:27.818345 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:49:27.823293 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 15:49:27.823391 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 15:49:27.824503 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 15:49:27.831218 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 15:49:27.840115 systemd[1]: Switching root. Jan 30 15:49:27.873707 systemd-journald[183]: Journal stopped Jan 30 15:49:29.805977 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 30 15:49:29.806034 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 15:49:29.806052 kernel: SELinux: policy capability open_perms=1 Jan 30 15:49:29.806063 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 15:49:29.806077 kernel: SELinux: policy capability always_check_network=0 Jan 30 15:49:29.806102 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 15:49:29.806118 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 15:49:29.806134 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 15:49:29.806145 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 15:49:29.806156 kernel: audit: type=1403 audit(1738252168.832:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 15:49:29.806168 systemd[1]: Successfully loaded SELinux policy in 78.179ms. Jan 30 15:49:29.806185 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.770ms. 
Jan 30 15:49:29.806199 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 15:49:29.806213 systemd[1]: Detected virtualization kvm. Jan 30 15:49:29.806225 systemd[1]: Detected architecture x86-64. Jan 30 15:49:29.806238 systemd[1]: Detected first boot. Jan 30 15:49:29.806250 systemd[1]: Hostname set to <ci-4081-3-0-c-6e27ecb2ae.novalocal>. Jan 30 15:49:29.806263 systemd[1]: Initializing machine ID from VM UUID. Jan 30 15:49:29.806275 zram_generator::config[1014]: No configuration found. Jan 30 15:49:29.806293 systemd[1]: Populated /etc with preset unit settings. Jan 30 15:49:29.806305 systemd[1]: Queued start job for default target multi-user.target. Jan 30 15:49:29.806321 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 15:49:29.806336 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 15:49:29.806348 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 15:49:29.806361 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 15:49:29.806373 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 15:49:29.806386 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 15:49:29.806398 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 15:49:29.806411 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 15:49:29.806426 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 15:49:29.806438 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 15:49:29.806451 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 15:49:29.806464 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 15:49:29.806476 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 15:49:29.806488 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 15:49:29.806502 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 15:49:29.806515 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 15:49:29.806527 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 15:49:29.806544 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 15:49:29.806556 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 15:49:29.806569 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 15:49:29.806581 systemd[1]: Reached target slices.target - Slice Units. Jan 30 15:49:29.806594 systemd[1]: Reached target swap.target - Swaps. Jan 30 15:49:29.806606 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 15:49:29.806619 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 15:49:29.806633 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 30 15:49:29.806646 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 15:49:29.806658 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 15:49:29.806670 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 15:49:29.806682 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 15:49:29.806695 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 15:49:29.806708 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 15:49:29.806722 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 15:49:29.806733 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 15:49:29.806747 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:49:29.806758 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 15:49:29.806770 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 15:49:29.806782 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 15:49:29.806793 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 15:49:29.806807 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:49:29.806818 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 15:49:29.806915 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 15:49:29.806932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:49:29.806944 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 15:49:29.806955 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 15:49:29.806967 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 15:49:29.806979 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 15:49:29.806991 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 15:49:29.807004 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 15:49:29.807016 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 15:49:29.807028 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 15:49:29.807041 kernel: fuse: init (API version 7.39) Jan 30 15:49:29.807052 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 15:49:29.807064 kernel: loop: module loaded Jan 30 15:49:29.807075 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 15:49:29.807087 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 15:49:29.807098 kernel: ACPI: bus type drm_connector registered Jan 30 15:49:29.807109 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 15:49:29.807121 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 15:49:29.807150 systemd-journald[1132]: Collecting audit messages is disabled. Jan 30 15:49:29.807176 systemd-journald[1132]: Journal started Jan 30 15:49:29.807200 systemd-journald[1132]: Runtime Journal (/run/log/journal/d8c3bd0a05604ec0bb678ebfaa69f4ea) is 8.0M, max 78.3M, 70.3M free. Jan 30 15:49:29.809869 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 15:49:29.811902 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 15:49:29.812978 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 15:49:29.813559 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 15:49:29.814202 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 15:49:29.814809 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 15:49:29.815460 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 15:49:29.816221 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 15:49:29.817025 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 15:49:29.817752 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 15:49:29.817925 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 15:49:29.818687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:49:29.818893 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:49:29.819599 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 15:49:29.819736 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 15:49:29.820662 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:49:29.820802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:49:29.821552 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 15:49:29.821690 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 15:49:29.822509 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 15:49:29.824978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 15:49:29.826778 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 15:49:29.827670 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 15:49:29.828465 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 15:49:29.838160 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 15:49:29.843994 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 15:49:29.846469 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 15:49:29.848911 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 15:49:29.856048 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 15:49:29.859668 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 15:49:29.864315 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 15:49:29.871740 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 30 15:49:29.873914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 15:49:29.875213 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:49:29.884518 systemd-journald[1132]: Time spent on flushing to /var/log/journal/d8c3bd0a05604ec0bb678ebfaa69f4ea is 36.977ms for 934 entries. Jan 30 15:49:29.884518 systemd-journald[1132]: System Journal (/var/log/journal/d8c3bd0a05604ec0bb678ebfaa69f4ea) is 8.0M, max 584.8M, 576.8M free. Jan 30 15:49:29.940056 systemd-journald[1132]: Received client request to flush runtime journal. Jan 30 15:49:29.886799 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 15:49:29.892794 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 15:49:29.895212 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 15:49:29.902228 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 15:49:29.910000 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 15:49:29.919021 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 15:49:29.919686 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 15:49:29.934002 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 15:49:29.942626 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 15:49:29.948908 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:49:29.952608 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jan 30 15:49:29.952628 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jan 30 15:49:29.959194 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 15:49:29.970987 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 15:49:30.000563 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 15:49:30.008056 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 15:49:30.020676 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 30 15:49:30.020697 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 30 15:49:30.024887 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 15:49:30.568378 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 15:49:30.580165 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 15:49:30.602639 systemd-udevd[1198]: Using default interface naming scheme 'v255'. Jan 30 15:49:30.634224 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 15:49:30.652251 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 15:49:30.703132 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 30 15:49:30.712584 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1217) Jan 30 15:49:30.762288 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 30 15:49:30.797878 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 15:49:30.812863 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 15:49:30.869645 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 15:49:30.871844 kernel: ACPI: button: Power Button [PWRF] Jan 30 15:49:30.877354 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 15:49:30.874901 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 15:49:30.889178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:49:30.896144 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 15:49:30.919415 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 15:49:30.919477 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 15:49:30.924860 kernel: Console: switching to colour dummy device 80x25 Jan 30 15:49:30.927283 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 15:49:30.927332 kernel: [drm] features: -context_init Jan 30 15:49:30.930852 kernel: [drm] number of scanouts: 1 Jan 30 15:49:30.930910 kernel: [drm] number of cap sets: 0 Jan 30 15:49:30.935241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:49:30.935512 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:49:30.940864 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 15:49:30.945107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:49:30.953190 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 15:49:30.953253 kernel: Console: switching to colour frame buffer device 160x50 Jan 30 15:49:30.967600 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 15:49:30.971366 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 15:49:30.971597 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:49:30.977068 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 15:49:30.982196 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 15:49:30.989197 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 15:49:30.990413 systemd-networkd[1214]: lo: Link UP Jan 30 15:49:30.990424 systemd-networkd[1214]: lo: Gained carrier Jan 30 15:49:30.993638 systemd-networkd[1214]: Enumeration completed Jan 30 15:49:30.993752 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 15:49:30.994049 systemd-networkd[1214]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:49:30.994058 systemd-networkd[1214]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 15:49:30.998524 systemd-networkd[1214]: eth0: Link UP Jan 30 15:49:30.998533 systemd-networkd[1214]: eth0: Gained carrier Jan 30 15:49:30.998547 systemd-networkd[1214]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 15:49:31.000788 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 30 15:49:31.008874 systemd-networkd[1214]: eth0: DHCPv4 address 172.24.4.96/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 30 15:49:31.020518 lvm[1248]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 15:49:31.055852 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 15:49:31.058378 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 15:49:31.061964 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 15:49:31.068648 lvm[1253]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 15:49:31.088719 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 15:49:31.091932 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 15:49:31.092617 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 15:49:31.092731 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 15:49:31.092752 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 15:49:31.092854 systemd[1]: Reached target machines.target - Containers. Jan 30 15:49:31.094667 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 15:49:31.102983 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 15:49:31.105128 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 15:49:31.106441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:49:31.107502 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 15:49:31.118151 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 15:49:31.130111 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 15:49:31.134939 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 15:49:31.141174 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 15:49:31.186963 kernel: loop0: detected capacity change from 0 to 210664 Jan 30 15:49:31.214488 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 15:49:31.216643 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 15:49:31.259170 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 15:49:31.291997 kernel: loop1: detected capacity change from 0 to 8 Jan 30 15:49:31.321909 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 15:49:31.417113 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 15:49:31.494915 kernel: loop4: detected capacity change from 0 to 210664 Jan 30 15:49:31.553876 kernel: loop5: detected capacity change from 0 to 8 Jan 30 15:49:31.560018 kernel: loop6: detected capacity change from 0 to 140768 Jan 30 15:49:31.591920 kernel: loop7: detected capacity change from 0 to 142488 Jan 30 15:49:31.637063 (sd-merge)[1279]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. 
Jan 30 15:49:31.638177 (sd-merge)[1279]: Merged extensions into '/usr'. Jan 30 15:49:31.645608 systemd[1]: Reloading requested from client PID 1265 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 15:49:31.645628 systemd[1]: Reloading... Jan 30 15:49:31.737492 zram_generator::config[1306]: No configuration found. Jan 30 15:49:31.920123 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:49:31.986239 systemd[1]: Reloading finished in 340 ms. Jan 30 15:49:32.000272 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 15:49:32.012053 systemd[1]: Starting ensure-sysext.service... Jan 30 15:49:32.020011 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 15:49:32.025733 systemd[1]: Reloading requested from client PID 1368 ('systemctl') (unit ensure-sysext.service)... Jan 30 15:49:32.025940 systemd[1]: Reloading... Jan 30 15:49:32.054265 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 15:49:32.054619 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 15:49:32.055474 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 15:49:32.055778 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Jan 30 15:49:32.056873 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Jan 30 15:49:32.061110 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 15:49:32.061123 systemd-tmpfiles[1369]: Skipping /boot Jan 30 15:49:32.070677 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 15:49:32.075563 ldconfig[1261]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 15:49:32.070693 systemd-tmpfiles[1369]: Skipping /boot Jan 30 15:49:32.107613 zram_generator::config[1399]: No configuration found. Jan 30 15:49:32.256514 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:49:32.322457 systemd[1]: Reloading finished in 295 ms. Jan 30 15:49:32.341617 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 15:49:32.353438 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 15:49:32.368985 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 15:49:32.377089 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 15:49:32.394375 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 15:49:32.408079 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 15:49:32.420976 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 15:49:32.431694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 15:49:32.432145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:49:32.434106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:49:32.441076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 15:49:32.450525 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 15:49:32.453398 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:49:32.453962 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:49:32.457419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:49:32.462741 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:49:32.468942 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 15:49:32.469133 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 15:49:32.477107 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 15:49:32.479292 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 15:49:32.485312 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:49:32.485615 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:49:32.495709 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:49:32.495993 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:49:32.503561 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:49:32.513065 augenrules[1499]: No rules Jan 30 15:49:32.516214 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 15:49:32.522018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 15:49:32.526009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:49:32.527921 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:49:32.529148 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 15:49:32.530143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:49:32.530304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:49:32.538292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:49:32.538479 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:49:32.544138 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 15:49:32.557405 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 15:49:32.560153 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 15:49:32.564646 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 15:49:32.565478 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 15:49:32.571042 systemd-resolved[1475]: Positive Trust Anchors: Jan 30 15:49:32.571369 systemd-resolved[1475]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 15:49:32.571413 systemd-resolved[1475]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 15:49:32.576107 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 15:49:32.577484 systemd-resolved[1475]: Using system hostname 'ci-4081-3-0-c-6e27ecb2ae.novalocal'. Jan 30 15:49:32.581968 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 15:49:32.588137 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 15:49:32.595397 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 15:49:32.601954 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 15:49:32.604004 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 15:49:32.605778 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 15:49:32.609811 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 15:49:32.610031 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 15:49:32.612667 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 15:49:32.612874 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 15:49:32.616915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 15:49:32.617268 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 15:49:32.622435 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 15:49:32.627235 systemd[1]: Finished ensure-sysext.service. Jan 30 15:49:32.633523 systemd[1]: Reached target network.target - Network. Jan 30 15:49:32.635096 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 15:49:32.636944 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 15:49:32.637056 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 15:49:32.642053 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 15:49:32.685738 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 15:49:32.688379 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 30 15:49:32.711715 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 15:49:32.712605 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 15:49:32.713273 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 15:49:32.713782 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 15:49:32.716699 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 15:49:32.718672 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 15:49:32.718789 systemd[1]: Reached target paths.target - Path Units. Jan 30 15:49:32.720754 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 15:49:32.723573 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 15:49:32.725521 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 15:49:32.726931 systemd[1]: Reached target timers.target - Timer Units. Jan 30 15:49:32.730974 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 15:49:32.738700 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 15:49:32.748725 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 15:49:32.750104 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 15:49:32.753534 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 15:49:32.755710 systemd[1]: Reached target basic.target - Basic System. Jan 30 15:49:32.758172 systemd[1]: System is tainted: cgroupsv1 Jan 30 15:49:32.758249 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 15:49:32.758292 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 15:49:32.765009 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 15:49:32.770990 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 15:49:32.777749 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 15:49:32.786948 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 15:49:32.795370 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 15:49:32.801639 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 15:49:33.481613 systemd-resolved[1475]: Clock change detected. Flushing caches. Jan 30 15:49:33.481865 systemd-timesyncd[1532]: Contacted time server 188.165.49.6:123 (0.flatcar.pool.ntp.org). Jan 30 15:49:33.481923 systemd-timesyncd[1532]: Initial clock synchronization to Thu 2025-01-30 15:49:33.481565 UTC. Jan 30 15:49:33.487443 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 15:49:33.494830 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 15:49:33.505292 jq[1542]: false Jan 30 15:49:33.515173 dbus-daemon[1541]: [system] SELinux support is enabled Jan 30 15:49:33.515992 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 30 15:49:33.525694 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 15:49:33.533834 extend-filesystems[1544]: Found loop4 Jan 30 15:49:33.536179 extend-filesystems[1544]: Found loop5 Jan 30 15:49:33.536179 extend-filesystems[1544]: Found loop6 Jan 30 15:49:33.536179 extend-filesystems[1544]: Found loop7 Jan 30 15:49:33.536179 extend-filesystems[1544]: Found vda Jan 30 15:49:33.536179 extend-filesystems[1544]: Found vda1 Jan 30 15:49:33.536179 extend-filesystems[1544]: Found vda2 Jan 30 15:49:33.536179 extend-filesystems[1544]: Found vda3 Jan 30 15:49:33.536179 extend-filesystems[1544]: Found usr Jan 30 15:49:33.536179 extend-filesystems[1544]: Found vda4 Jan 30 15:49:33.536179 extend-filesystems[1544]: Found vda6 Jan 30 15:49:33.536179 extend-filesystems[1544]: Found vda7 Jan 30 15:49:33.536179 extend-filesystems[1544]: Found vda9 Jan 30 15:49:33.536179 extend-filesystems[1544]: Checking size of /dev/vda9 Jan 30 15:49:33.546674 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 15:49:33.554966 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 15:49:33.556668 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 15:49:33.569475 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 15:49:33.586361 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 15:49:33.594398 jq[1564]: true Jan 30 15:49:33.598077 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 15:49:33.598632 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 15:49:33.598933 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 15:49:33.599130 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 15:49:33.612861 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 15:49:33.621425 extend-filesystems[1544]: Resized partition /dev/vda9 Jan 30 15:49:33.622764 update_engine[1563]: I20250130 15:49:33.614125 1563 main.cc:92] Flatcar Update Engine starting Jan 30 15:49:33.622764 update_engine[1563]: I20250130 15:49:33.619868 1563 update_check_scheduler.cc:74] Next update check in 4m45s Jan 30 15:49:33.613094 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 15:49:33.630011 extend-filesystems[1574]: resize2fs 1.47.1 (20-May-2024) Jan 30 15:49:33.646207 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 15:49:33.651521 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 30 15:49:33.657974 systemd[1]: Started update-engine.service - Update Engine. Jan 30 15:49:33.660305 jq[1575]: true Jan 30 15:49:33.662520 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1204) Jan 30 15:49:33.684675 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 30 15:49:33.691346 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 15:49:33.754101 extend-filesystems[1574]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 15:49:33.754101 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 15:49:33.754101 extend-filesystems[1574]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. 
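As a quick cross-check of the resize figures reported above (the log states 4 KiB blocks), the root filesystem grew from about 6.2 GiB to about 7.7 GiB; a minimal Python verification, assuming GiB means 1024**3 bytes:

```python
# Sanity-check the ext4 on-line resize reported above. Block counts and the
# 4 KiB block size come from the log ("(4k) blocks"); GiB = 1024**3 is assumed.
BLOCK_SIZE = 4096
BEFORE_BLOCKS = 1_617_920   # "resizing filesystem from 1617920 to 2014203 blocks"
AFTER_BLOCKS = 2_014_203

def to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 1024**3

print(f"before resize: {to_gib(BEFORE_BLOCKS):.2f} GiB")  # ~6.17 GiB
print(f"after resize:  {to_gib(AFTER_BLOCKS):.2f} GiB")   # ~7.68 GiB
```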
Jan 30 15:49:33.784457 tar[1573]: linux-amd64/helm Jan 30 15:49:33.699727 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 15:49:33.784830 extend-filesystems[1544]: Resized filesystem in /dev/vda9 Jan 30 15:49:33.699755 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 15:49:33.700563 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 15:49:33.700582 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 15:49:33.705574 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 15:49:33.712682 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 15:49:33.725578 systemd-networkd[1214]: eth0: Gained IPv6LL Jan 30 15:49:33.729593 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 15:49:33.731031 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 15:49:33.749791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:49:33.752013 systemd-logind[1561]: New seat seat0. Jan 30 15:49:33.758675 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 15:49:33.771594 systemd-logind[1561]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 15:49:33.771610 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 15:49:33.773662 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 15:49:33.773907 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 15:49:33.779032 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 15:49:33.815559 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 15:49:33.829037 bash[1608]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:49:33.826525 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 15:49:33.846174 systemd[1]: Starting sshkeys.service... Jan 30 15:49:33.860674 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 15:49:33.877631 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 15:49:33.895876 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 15:49:33.949913 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 15:49:33.968420 locksmithd[1590]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 15:49:33.969137 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 15:49:33.978834 systemd[1]: Started sshd@0-172.24.4.96:22-172.24.4.1:32970.service - OpenSSH per-connection server daemon (172.24.4.1:32970). Jan 30 15:49:33.991808 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 15:49:33.992041 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 15:49:34.005876 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 30 15:49:34.044106 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 15:49:34.056231 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 15:49:34.073176 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 15:49:34.075070 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 15:49:34.224581 containerd[1577]: time="2025-01-30T15:49:34.224475920Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 15:49:34.270269 containerd[1577]: time="2025-01-30T15:49:34.269797270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:49:34.272747 containerd[1577]: time="2025-01-30T15:49:34.272597672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:49:34.272747 containerd[1577]: time="2025-01-30T15:49:34.272632057Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 15:49:34.272747 containerd[1577]: time="2025-01-30T15:49:34.272649149Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 15:49:34.273081 containerd[1577]: time="2025-01-30T15:49:34.272867117Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 15:49:34.273081 containerd[1577]: time="2025-01-30T15:49:34.272893206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 15:49:34.273081 containerd[1577]: time="2025-01-30T15:49:34.272969469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:49:34.273081 containerd[1577]: time="2025-01-30T15:49:34.272986121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:49:34.273361 containerd[1577]: time="2025-01-30T15:49:34.273225500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:49:34.273361 containerd[1577]: time="2025-01-30T15:49:34.273251809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 15:49:34.273361 containerd[1577]: time="2025-01-30T15:49:34.273267077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:49:34.273361 containerd[1577]: time="2025-01-30T15:49:34.273278028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 15:49:34.273361 containerd[1577]: time="2025-01-30T15:49:34.273357697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 15:49:34.273683 containerd[1577]: time="2025-01-30T15:49:34.273594552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 15:49:34.273907 containerd[1577]: time="2025-01-30T15:49:34.273736849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 15:49:34.273907 containerd[1577]: time="2025-01-30T15:49:34.273758960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 15:49:34.273907 containerd[1577]: time="2025-01-30T15:49:34.273842146Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 15:49:34.273907 containerd[1577]: time="2025-01-30T15:49:34.273891579Z" level=info msg="metadata content store policy set" policy=shared Jan 30 15:49:34.284730 containerd[1577]: time="2025-01-30T15:49:34.284692155Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 15:49:34.284814 containerd[1577]: time="2025-01-30T15:49:34.284785210Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 15:49:34.284839 containerd[1577]: time="2025-01-30T15:49:34.284814254Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 15:49:34.285054 containerd[1577]: time="2025-01-30T15:49:34.284873305Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 15:49:34.285054 containerd[1577]: time="2025-01-30T15:49:34.284898683Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 15:49:34.285102 containerd[1577]: time="2025-01-30T15:49:34.285057510Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 15:49:34.285666 containerd[1577]: time="2025-01-30T15:49:34.285571274Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.285765789Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.285825330Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.285845649Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.285861799Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.285877037Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.285910130Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.285927943Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.285944594Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.285959993Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.285993496Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.286010728Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.286033681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.286051034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.286123 containerd[1577]: time="2025-01-30T15:49:34.286085699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.286409 containerd[1577]: time="2025-01-30T15:49:34.286102009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.286409 containerd[1577]: time="2025-01-30T15:49:34.286116737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.286409 containerd[1577]: time="2025-01-30T15:49:34.286132356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.286409 containerd[1577]: time="2025-01-30T15:49:34.286166731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.286409 containerd[1577]: time="2025-01-30T15:49:34.286183783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.286409 containerd[1577]: time="2025-01-30T15:49:34.286199572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.286409 containerd[1577]: time="2025-01-30T15:49:34.286216965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.286409 containerd[1577]: time="2025-01-30T15:49:34.286251380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.286409 containerd[1577]: time="2025-01-30T15:49:34.286267339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.286409 containerd[1577]: time="2025-01-30T15:49:34.286281807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.287460 containerd[1577]: time="2025-01-30T15:49:34.287100302Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 15:49:34.287460 containerd[1577]: time="2025-01-30T15:49:34.287137822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 30 15:49:34.287460 containerd[1577]: time="2025-01-30T15:49:34.287171786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.287460 containerd[1577]: time="2025-01-30T15:49:34.287187115Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 15:49:34.287460 containerd[1577]: time="2025-01-30T15:49:34.287259550Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 15:49:34.287460 containerd[1577]: time="2025-01-30T15:49:34.287280650Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 15:49:34.287460 containerd[1577]: time="2025-01-30T15:49:34.287292903Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 15:49:34.287460 containerd[1577]: time="2025-01-30T15:49:34.287306048Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 15:49:34.287460 containerd[1577]: time="2025-01-30T15:49:34.287316978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 15:49:34.287460 containerd[1577]: time="2025-01-30T15:49:34.287348818Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 15:49:34.287460 containerd[1577]: time="2025-01-30T15:49:34.287361311Z" level=info msg="NRI interface is disabled by configuration." Jan 30 15:49:34.287460 containerd[1577]: time="2025-01-30T15:49:34.287372923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 15:49:34.289815 containerd[1577]: time="2025-01-30T15:49:34.287848395Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 15:49:34.289815 containerd[1577]: time="2025-01-30T15:49:34.287937752Z" level=info msg="Connect containerd service" Jan 30 15:49:34.289815 containerd[1577]: time="2025-01-30T15:49:34.287986243Z" level=info msg="using legacy CRI server" Jan 30 15:49:34.289815 containerd[1577]: time="2025-01-30T15:49:34.288014106Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 15:49:34.289815 containerd[1577]: time="2025-01-30T15:49:34.288135082Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 15:49:34.289815 containerd[1577]: time="2025-01-30T15:49:34.289553082Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 
15:49:34.290058 containerd[1577]: time="2025-01-30T15:49:34.289926272Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 15:49:34.290113 containerd[1577]: time="2025-01-30T15:49:34.290076053Z" level=info msg="Start subscribing containerd event" Jan 30 15:49:34.290144 containerd[1577]: time="2025-01-30T15:49:34.290117150Z" level=info msg="Start recovering state" Jan 30 15:49:34.290191 containerd[1577]: time="2025-01-30T15:49:34.290169568Z" level=info msg="Start event monitor" Jan 30 15:49:34.290191 containerd[1577]: time="2025-01-30T15:49:34.290190107Z" level=info msg="Start snapshots syncer" Jan 30 15:49:34.290240 containerd[1577]: time="2025-01-30T15:49:34.290199765Z" level=info msg="Start cni network conf syncer for default" Jan 30 15:49:34.290240 containerd[1577]: time="2025-01-30T15:49:34.290210104Z" level=info msg="Start streaming server" Jan 30 15:49:34.291510 containerd[1577]: time="2025-01-30T15:49:34.291040431Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 15:49:34.291704 containerd[1577]: time="2025-01-30T15:49:34.291676484Z" level=info msg="containerd successfully booted in 0.067900s" Jan 30 15:49:34.291750 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 15:49:34.402220 tar[1573]: linux-amd64/LICENSE Jan 30 15:49:34.402406 tar[1573]: linux-amd64/README.md Jan 30 15:49:34.416107 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 15:49:34.879214 sshd[1642]: Accepted publickey for core from 172.24.4.1 port 32970 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:49:34.885923 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:49:34.915489 systemd-logind[1561]: New session 1 of user core. Jan 30 15:49:34.917319 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 15:49:34.932106 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 15:49:34.950798 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 15:49:34.962917 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 15:49:34.977091 (systemd)[1672]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 15:49:35.094768 systemd[1672]: Queued start job for default target default.target. Jan 30 15:49:35.095084 systemd[1672]: Created slice app.slice - User Application Slice. Jan 30 15:49:35.095105 systemd[1672]: Reached target paths.target - Paths. Jan 30 15:49:35.095119 systemd[1672]: Reached target timers.target - Timers. Jan 30 15:49:35.105616 systemd[1672]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 15:49:35.112342 systemd[1672]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 15:49:35.112394 systemd[1672]: Reached target sockets.target - Sockets. Jan 30 15:49:35.112409 systemd[1672]: Reached target basic.target - Basic System. Jan 30 15:49:35.112442 systemd[1672]: Reached target default.target - Main User Target. Jan 30 15:49:35.112467 systemd[1672]: Startup finished in 129ms. Jan 30 15:49:35.112585 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 15:49:35.119702 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 15:49:35.466713 systemd[1]: Started sshd@1-172.24.4.96:22-172.24.4.1:32974.service - OpenSSH per-connection server daemon (172.24.4.1:32974). 
Jan 30 15:49:35.710761 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:49:35.731604 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:49:37.217285 kubelet[1693]: E0130 15:49:37.217156 1693 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:49:37.220581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:49:37.220966 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:49:37.683405 sshd[1684]: Accepted publickey for core from 172.24.4.1 port 32974 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:49:37.686184 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:49:37.696122 systemd-logind[1561]: New session 2 of user core. Jan 30 15:49:37.706279 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 15:49:38.455594 sshd[1684]: pam_unix(sshd:session): session closed for user core Jan 30 15:49:38.470803 systemd[1]: Started sshd@2-172.24.4.96:22-172.24.4.1:32980.service - OpenSSH per-connection server daemon (172.24.4.1:32980). Jan 30 15:49:38.476471 systemd[1]: sshd@1-172.24.4.96:22-172.24.4.1:32974.service: Deactivated successfully. Jan 30 15:49:38.489692 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 15:49:38.493300 systemd-logind[1561]: Session 2 logged out. Waiting for processes to exit. Jan 30 15:49:38.496831 systemd-logind[1561]: Removed session 2. Jan 30 15:49:39.106222 login[1651]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:49:39.121333 systemd-logind[1561]: New session 3 of user core. Jan 30 15:49:39.125194 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 15:49:39.129858 login[1652]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 15:49:39.147930 systemd-logind[1561]: New session 4 of user core. Jan 30 15:49:39.157111 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 15:49:39.834863 sshd[1708]: Accepted publickey for core from 172.24.4.1 port 32980 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:49:39.837671 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:49:39.847165 systemd-logind[1561]: New session 5 of user core. Jan 30 15:49:39.856308 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 15:49:40.550895 coreos-metadata[1540]: Jan 30 15:49:40.550 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:49:40.598923 coreos-metadata[1540]: Jan 30 15:49:40.598 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 30 15:49:40.700064 sshd[1708]: pam_unix(sshd:session): session closed for user core Jan 30 15:49:40.705452 systemd[1]: sshd@2-172.24.4.96:22-172.24.4.1:32980.service: Deactivated successfully. Jan 30 15:49:40.712290 systemd-logind[1561]: Session 5 logged out. Waiting for processes to exit. Jan 30 15:49:40.713390 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 15:49:40.717103 systemd-logind[1561]: Removed session 5. 
Jan 30 15:49:40.759562 coreos-metadata[1540]: Jan 30 15:49:40.759 INFO Fetch successful Jan 30 15:49:40.759562 coreos-metadata[1540]: Jan 30 15:49:40.759 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 15:49:40.774798 coreos-metadata[1540]: Jan 30 15:49:40.774 INFO Fetch successful Jan 30 15:49:40.774798 coreos-metadata[1540]: Jan 30 15:49:40.774 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 30 15:49:40.789699 coreos-metadata[1540]: Jan 30 15:49:40.789 INFO Fetch successful Jan 30 15:49:40.789699 coreos-metadata[1540]: Jan 30 15:49:40.789 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 30 15:49:40.802937 coreos-metadata[1540]: Jan 30 15:49:40.802 INFO Fetch successful Jan 30 15:49:40.802937 coreos-metadata[1540]: Jan 30 15:49:40.802 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 30 15:49:40.816123 coreos-metadata[1540]: Jan 30 15:49:40.816 INFO Fetch successful Jan 30 15:49:40.816123 coreos-metadata[1540]: Jan 30 15:49:40.816 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 30 15:49:40.828953 coreos-metadata[1540]: Jan 30 15:49:40.828 INFO Fetch successful Jan 30 15:49:40.868464 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 15:49:40.871400 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 15:49:41.020976 coreos-metadata[1633]: Jan 30 15:49:41.020 WARN failed to locate config-drive, using the metadata service API instead Jan 30 15:49:41.063748 coreos-metadata[1633]: Jan 30 15:49:41.063 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 30 15:49:41.079256 coreos-metadata[1633]: Jan 30 15:49:41.079 INFO Fetch successful Jan 30 15:49:41.079256 coreos-metadata[1633]: Jan 30 15:49:41.079 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 15:49:41.092317 coreos-metadata[1633]: Jan 30 15:49:41.092 INFO Fetch successful Jan 30 15:49:41.097873 unknown[1633]: wrote ssh authorized keys file for user: core Jan 30 15:49:41.135736 update-ssh-keys[1759]: Updated "/home/core/.ssh/authorized_keys" Jan 30 15:49:41.136837 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 15:49:41.144834 systemd[1]: Finished sshkeys.service. Jan 30 15:49:41.152163 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 15:49:41.152432 systemd[1]: Startup finished in 17.179s (kernel) + 11.725s (userspace) = 28.904s. Jan 30 15:49:47.298349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 15:49:47.305830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:49:47.648706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 15:49:47.653523 (kubelet)[1777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:49:47.778043 kubelet[1777]: E0130 15:49:47.777916 1777 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:49:47.785919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:49:47.786306 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:49:50.713964 systemd[1]: Started sshd@3-172.24.4.96:22-172.24.4.1:43930.service - OpenSSH per-connection server daemon (172.24.4.1:43930). Jan 30 15:49:52.098528 sshd[1786]: Accepted publickey for core from 172.24.4.1 port 43930 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:49:52.101579 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:49:52.114022 systemd-logind[1561]: New session 6 of user core. Jan 30 15:49:52.122699 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 15:49:52.826825 sshd[1786]: pam_unix(sshd:session): session closed for user core Jan 30 15:49:52.839719 systemd[1]: Started sshd@4-172.24.4.96:22-172.24.4.1:43942.service - OpenSSH per-connection server daemon (172.24.4.1:43942). Jan 30 15:49:52.840815 systemd[1]: sshd@3-172.24.4.96:22-172.24.4.1:43930.service: Deactivated successfully. Jan 30 15:49:52.852187 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 15:49:52.854319 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit. Jan 30 15:49:52.859728 systemd-logind[1561]: Removed session 6. Jan 30 15:49:54.261415 sshd[1791]: Accepted publickey for core from 172.24.4.1 port 43942 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:49:54.264027 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:49:54.275074 systemd-logind[1561]: New session 7 of user core. Jan 30 15:49:54.283010 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 15:49:54.967857 sshd[1791]: pam_unix(sshd:session): session closed for user core Jan 30 15:49:54.981281 systemd[1]: Started sshd@5-172.24.4.96:22-172.24.4.1:58828.service - OpenSSH per-connection server daemon (172.24.4.1:58828). Jan 30 15:49:54.982350 systemd[1]: sshd@4-172.24.4.96:22-172.24.4.1:43942.service: Deactivated successfully. Jan 30 15:49:54.990897 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 15:49:54.995884 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit. Jan 30 15:49:54.998751 systemd-logind[1561]: Removed session 7. Jan 30 15:49:56.547402 sshd[1799]: Accepted publickey for core from 172.24.4.1 port 58828 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:49:56.550382 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:49:56.561059 systemd-logind[1561]: New session 8 of user core. Jan 30 15:49:56.569037 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 30 15:49:57.189895 sshd[1799]: pam_unix(sshd:session): session closed for user core Jan 30 15:49:57.208259 systemd[1]: Started sshd@6-172.24.4.96:22-172.24.4.1:58840.service - OpenSSH per-connection server daemon (172.24.4.1:58840). Jan 30 15:49:57.209389 systemd[1]: sshd@5-172.24.4.96:22-172.24.4.1:58828.service: Deactivated successfully. Jan 30 15:49:57.218148 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 15:49:57.220897 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit. Jan 30 15:49:57.225028 systemd-logind[1561]: Removed session 8. Jan 30 15:49:57.798639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 15:49:57.812032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:49:58.119883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:49:58.132154 (kubelet)[1824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:49:58.214096 kubelet[1824]: E0130 15:49:58.214020 1824 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:49:58.216244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:49:58.216955 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 15:49:58.481295 sshd[1807]: Accepted publickey for core from 172.24.4.1 port 58840 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:49:58.483758 sshd[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:49:58.493930 systemd-logind[1561]: New session 9 of user core. Jan 30 15:49:58.506192 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 15:49:58.930789 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 15:49:58.931459 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:49:58.953036 sudo[1835]: pam_unix(sudo:session): session closed for user root Jan 30 15:49:59.198678 sshd[1807]: pam_unix(sshd:session): session closed for user core Jan 30 15:49:59.212156 systemd[1]: Started sshd@7-172.24.4.96:22-172.24.4.1:58852.service - OpenSSH per-connection server daemon (172.24.4.1:58852). Jan 30 15:49:59.213238 systemd[1]: sshd@6-172.24.4.96:22-172.24.4.1:58840.service: Deactivated successfully. Jan 30 15:49:59.219812 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 15:49:59.221797 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Jan 30 15:49:59.226904 systemd-logind[1561]: Removed session 9. Jan 30 15:50:00.633524 sshd[1837]: Accepted publickey for core from 172.24.4.1 port 58852 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:50:00.637432 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:50:00.648125 systemd-logind[1561]: New session 10 of user core. Jan 30 15:50:00.655797 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 30 15:50:01.197227 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 15:50:01.197927 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:50:01.205418 sudo[1845]: pam_unix(sudo:session): session closed for user root Jan 30 15:50:01.217176 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 15:50:01.217887 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:50:01.241032 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 15:50:01.256866 auditctl[1848]: No rules Jan 30 15:50:01.257724 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 15:50:01.258250 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 15:50:01.269328 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 15:50:01.340029 augenrules[1867]: No rules Jan 30 15:50:01.344055 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 15:50:01.347874 sudo[1844]: pam_unix(sudo:session): session closed for user root Jan 30 15:50:01.500370 sshd[1837]: pam_unix(sshd:session): session closed for user core Jan 30 15:50:01.513194 systemd[1]: Started sshd@8-172.24.4.96:22-172.24.4.1:58858.service - OpenSSH per-connection server daemon (172.24.4.1:58858). Jan 30 15:50:01.514310 systemd[1]: sshd@7-172.24.4.96:22-172.24.4.1:58852.service: Deactivated successfully. Jan 30 15:50:01.519351 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 15:50:01.522587 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit. Jan 30 15:50:01.527040 systemd-logind[1561]: Removed session 10. Jan 30 15:50:02.675010 sshd[1874]: Accepted publickey for core from 172.24.4.1 port 58858 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:50:02.678303 sshd[1874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:50:02.691586 systemd-logind[1561]: New session 11 of user core. Jan 30 15:50:02.698055 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 15:50:03.146690 sudo[1880]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 15:50:03.147351 sudo[1880]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 15:50:03.774877 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 15:50:03.797413 (dockerd)[1897]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 15:50:04.482765 dockerd[1897]: time="2025-01-30T15:50:04.482673151Z" level=info msg="Starting up" Jan 30 15:50:04.913186 dockerd[1897]: time="2025-01-30T15:50:04.912196111Z" level=info msg="Loading containers: start." Jan 30 15:50:05.079562 kernel: Initializing XFRM netlink socket Jan 30 15:50:05.189206 systemd-networkd[1214]: docker0: Link UP Jan 30 15:50:05.215929 dockerd[1897]: time="2025-01-30T15:50:05.215868709Z" level=info msg="Loading containers: done." Jan 30 15:50:05.234610 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck493029169-merged.mount: Deactivated successfully. 
Jan 30 15:50:05.240839 dockerd[1897]: time="2025-01-30T15:50:05.240774109Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 15:50:05.241095 dockerd[1897]: time="2025-01-30T15:50:05.240998330Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 15:50:05.241292 dockerd[1897]: time="2025-01-30T15:50:05.241253348Z" level=info msg="Daemon has completed initialization" Jan 30 15:50:05.314947 dockerd[1897]: time="2025-01-30T15:50:05.314760490Z" level=info msg="API listen on /run/docker.sock" Jan 30 15:50:05.317672 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 15:50:07.056281 containerd[1577]: time="2025-01-30T15:50:07.056206551Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 15:50:07.822671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77222301.mount: Deactivated successfully. Jan 30 15:50:08.297576 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 15:50:08.303453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:50:08.412707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:50:08.426820 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:50:08.706601 kubelet[2102]: E0130 15:50:08.706297 2102 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:50:08.708559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:50:08.708779 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 15:50:09.817333 containerd[1577]: time="2025-01-30T15:50:09.817275929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:09.819092 containerd[1577]: time="2025-01-30T15:50:09.818779699Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020" Jan 30 15:50:09.820306 containerd[1577]: time="2025-01-30T15:50:09.820276706Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:09.824872 containerd[1577]: time="2025-01-30T15:50:09.824838912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:09.826154 containerd[1577]: time="2025-01-30T15:50:09.826116780Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.769866937s" Jan 30 15:50:09.826242 containerd[1577]: time="2025-01-30T15:50:09.826225543Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 15:50:09.847424 containerd[1577]: time="2025-01-30T15:50:09.847387129Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 15:50:12.010583 containerd[1577]: time="2025-01-30T15:50:12.010460109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:12.011885 containerd[1577]: time="2025-01-30T15:50:12.011698685Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753" Jan 30 15:50:12.013043 containerd[1577]: time="2025-01-30T15:50:12.012988377Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:12.016354 containerd[1577]: time="2025-01-30T15:50:12.016294029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:12.017754 containerd[1577]: time="2025-01-30T15:50:12.017394387Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.169973275s" Jan 30 15:50:12.017754 containerd[1577]: time="2025-01-30T15:50:12.017429713Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 15:50:12.042666 
containerd[1577]: time="2025-01-30T15:50:12.042420244Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 15:50:13.597031 containerd[1577]: time="2025-01-30T15:50:13.596871190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:13.598256 containerd[1577]: time="2025-01-30T15:50:13.598114777Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072" Jan 30 15:50:13.599546 containerd[1577]: time="2025-01-30T15:50:13.599477816Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:13.602981 containerd[1577]: time="2025-01-30T15:50:13.602920334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:13.604297 containerd[1577]: time="2025-01-30T15:50:13.604048164Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.56159053s" Jan 30 15:50:13.604297 containerd[1577]: time="2025-01-30T15:50:13.604082999Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 15:50:13.625724 containerd[1577]: time="2025-01-30T15:50:13.625668079Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 15:50:14.994949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748859531.mount: Deactivated successfully. 
Jan 30 15:50:15.786233 containerd[1577]: time="2025-01-30T15:50:15.786108756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:15.788665 containerd[1577]: time="2025-01-30T15:50:15.788492095Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 30 15:50:15.790854 containerd[1577]: time="2025-01-30T15:50:15.790750199Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:15.795873 containerd[1577]: time="2025-01-30T15:50:15.795777364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:15.798012 containerd[1577]: time="2025-01-30T15:50:15.797709559Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.171981988s" Jan 30 15:50:15.798012 containerd[1577]: time="2025-01-30T15:50:15.797804257Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 15:50:15.851120 containerd[1577]: time="2025-01-30T15:50:15.850712574Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 15:50:16.510589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2333764224.mount: Deactivated successfully. 
Jan 30 15:50:17.654448 containerd[1577]: time="2025-01-30T15:50:17.654404821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:17.656925 containerd[1577]: time="2025-01-30T15:50:17.656881156Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 30 15:50:17.659111 containerd[1577]: time="2025-01-30T15:50:17.659090220Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:17.662806 containerd[1577]: time="2025-01-30T15:50:17.662759306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:17.663902 containerd[1577]: time="2025-01-30T15:50:17.663876527Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.813101687s" Jan 30 15:50:17.663990 containerd[1577]: time="2025-01-30T15:50:17.663973548Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 15:50:17.688287 containerd[1577]: time="2025-01-30T15:50:17.688258979Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 15:50:18.275676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097712180.mount: Deactivated successfully. 
Jan 30 15:50:18.286531 containerd[1577]: time="2025-01-30T15:50:18.286349464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:18.288487 containerd[1577]: time="2025-01-30T15:50:18.288384753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 30 15:50:18.290112 containerd[1577]: time="2025-01-30T15:50:18.289975901Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:18.297649 containerd[1577]: time="2025-01-30T15:50:18.297572380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:18.300033 containerd[1577]: time="2025-01-30T15:50:18.299962333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 611.517606ms" Jan 30 15:50:18.300190 containerd[1577]: time="2025-01-30T15:50:18.300036011Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 15:50:18.346362 containerd[1577]: time="2025-01-30T15:50:18.346194868Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 15:50:18.798213 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 15:50:18.811915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:50:19.262483 update_engine[1563]: I20250130 15:50:19.262261 1563 update_attempter.cc:509] Updating boot flags... Jan 30 15:50:20.027561 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2219) Jan 30 15:50:20.489861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:50:20.503188 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 15:50:20.600545 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2222) Jan 30 15:50:20.649053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4197276523.mount: Deactivated successfully. Jan 30 15:50:20.664556 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2222) Jan 30 15:50:20.665950 kubelet[2233]: E0130 15:50:20.665908 2233 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 15:50:20.674353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 15:50:20.674631 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 15:50:23.161431 containerd[1577]: time="2025-01-30T15:50:23.161330418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:23.168043 containerd[1577]: time="2025-01-30T15:50:23.167518297Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 30 15:50:23.168043 containerd[1577]: time="2025-01-30T15:50:23.167860177Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:23.171317 containerd[1577]: time="2025-01-30T15:50:23.171275883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:50:23.172635 containerd[1577]: time="2025-01-30T15:50:23.172598391Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.826366884s" Jan 30 15:50:23.172684 containerd[1577]: time="2025-01-30T15:50:23.172635320Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 15:50:27.584150 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:50:27.592761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:50:27.624729 systemd[1]: Reloading requested from client PID 2353 ('systemctl') (unit session-11.scope)... Jan 30 15:50:27.624864 systemd[1]: Reloading... Jan 30 15:50:27.718592 zram_generator::config[2392]: No configuration found. Jan 30 15:50:27.873434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:50:27.948114 systemd[1]: Reloading finished in 322 ms. Jan 30 15:50:27.998245 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 15:50:27.998322 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 15:50:27.998689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:50:28.010791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:50:28.118007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:50:28.122849 (kubelet)[2468]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 15:50:28.173926 kubelet[2468]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:50:28.173926 kubelet[2468]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 15:50:28.173926 kubelet[2468]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:50:28.327635 kubelet[2468]: I0130 15:50:28.326533 2468 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 15:50:29.005170 kubelet[2468]: I0130 15:50:29.004903 2468 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 15:50:29.005170 kubelet[2468]: I0130 15:50:29.004946 2468 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:50:29.008356 kubelet[2468]: I0130 15:50:29.008325 2468 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 15:50:29.033047 kubelet[2468]: I0130 15:50:29.032981 2468 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:50:29.035615 kubelet[2468]: E0130 15:50:29.035439 2468 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:29.051267 kubelet[2468]: I0130 15:50:29.051231 2468 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 15:50:29.052542 kubelet[2468]: I0130 15:50:29.052073 2468 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 15:50:29.052542 kubelet[2468]: I0130 15:50:29.052132 2468 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-c-6e27ecb2ae.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 15:50:29.053863 kubelet[2468]: I0130 15:50:29.053822 2468 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 30 15:50:29.053908 kubelet[2468]: I0130 15:50:29.053868 2468 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 15:50:29.054091 kubelet[2468]: I0130 15:50:29.054063 2468 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:50:29.056026 kubelet[2468]: I0130 15:50:29.056000 2468 kubelet.go:400] "Attempting to sync node with API server" Jan 30 15:50:29.056080 kubelet[2468]: I0130 15:50:29.056036 2468 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 15:50:29.056080 kubelet[2468]: I0130 15:50:29.056073 2468 kubelet.go:312] "Adding apiserver pod source" Jan 30 15:50:29.056125 kubelet[2468]: I0130 15:50:29.056097 2468 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 15:50:29.065224 kubelet[2468]: W0130 15:50:29.064894 2468 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:29.065224 kubelet[2468]: E0130 15:50:29.064994 2468 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:29.067340 kubelet[2468]: W0130 15:50:29.067228 2468 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c-6e27ecb2ae.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:29.067340 kubelet[2468]: E0130 15:50:29.067317 2468 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c-6e27ecb2ae.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:29.067554 kubelet[2468]: I0130 15:50:29.067491 2468 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 15:50:29.070660 kubelet[2468]: I0130 15:50:29.070622 2468 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 15:50:29.070728 kubelet[2468]: W0130 15:50:29.070705 2468 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 15:50:29.071857 kubelet[2468]: I0130 15:50:29.071766 2468 server.go:1264] "Started kubelet" Jan 30 15:50:29.076205 kubelet[2468]: I0130 15:50:29.076171 2468 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 15:50:29.078087 kubelet[2468]: E0130 15:50:29.077990 2468 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.96:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.96:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-c-6e27ecb2ae.novalocal.181f833153d9cbcb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-c-6e27ecb2ae.novalocal,UID:ci-4081-3-0-c-6e27ecb2ae.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-c-6e27ecb2ae.novalocal,},FirstTimestamp:2025-01-30 15:50:29.071719371 +0000 UTC m=+0.945228978,LastTimestamp:2025-01-30 15:50:29.071719371 +0000 UTC m=+0.945228978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-c-6e27ecb2ae.novalocal,}" Jan 30 15:50:29.078941 kubelet[2468]: I0130 15:50:29.078220 2468 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 15:50:29.079554 kubelet[2468]: I0130 15:50:29.079543 2468 server.go:455] "Adding debug handlers to kubelet server" Jan 30 15:50:29.081159 kubelet[2468]: I0130 15:50:29.081114 2468 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 15:50:29.081732 kubelet[2468]: I0130 15:50:29.081720 2468 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 15:50:29.082081 kubelet[2468]: I0130 15:50:29.082051 2468 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 15:50:29.088636 kubelet[2468]: I0130 15:50:29.088602 2468 factory.go:221] Registration of the systemd container factory successfully Jan 30 15:50:29.088830 kubelet[2468]: I0130 15:50:29.088794 2468 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 15:50:29.090375 kubelet[2468]: I0130 15:50:29.081697 2468 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 15:50:29.095463 kubelet[2468]: W0130 15:50:29.095405 2468 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:29.095463 kubelet[2468]: E0130 15:50:29.095466 2468 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:29.095638 kubelet[2468]: E0130 15:50:29.095538 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c-6e27ecb2ae.novalocal?timeout=10s\": dial tcp 172.24.4.96:6443: connect: connection refused" interval="200ms" Jan 30 15:50:29.098767 kubelet[2468]: I0130 15:50:29.097655 2468 
kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:50:29.098767 kubelet[2468]: I0130 15:50:29.098131 2468 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:50:29.099576 kubelet[2468]: I0130 15:50:29.098960 2468 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 15:50:29.099576 kubelet[2468]: I0130 15:50:29.098989 2468 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 15:50:29.099576 kubelet[2468]: I0130 15:50:29.099005 2468 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 15:50:29.099576 kubelet[2468]: E0130 15:50:29.099035 2468 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 15:50:29.102585 kubelet[2468]: E0130 15:50:29.102492 2468 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 15:50:29.103475 kubelet[2468]: I0130 15:50:29.103452 2468 factory.go:221] Registration of the containerd container factory successfully Jan 30 15:50:29.104433 kubelet[2468]: W0130 15:50:29.104384 2468 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:29.104640 kubelet[2468]: E0130 15:50:29.104628 2468 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:29.131026 kubelet[2468]: I0130 15:50:29.130973 2468 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 15:50:29.131026 kubelet[2468]: I0130 15:50:29.131022 2468 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 15:50:29.131148 kubelet[2468]: I0130 15:50:29.131051 2468 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:50:29.135514 kubelet[2468]: I0130 15:50:29.135470 2468 policy_none.go:49] "None policy: Start" Jan 30 15:50:29.136517 kubelet[2468]: I0130 15:50:29.136462 2468 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 15:50:29.136634 kubelet[2468]: I0130 15:50:29.136585 2468 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:50:29.145000 kubelet[2468]: I0130 15:50:29.144977 2468 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:50:29.148193 kubelet[2468]: I0130 15:50:29.147890 2468 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:50:29.148193 kubelet[2468]: I0130 15:50:29.148061 2468 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:50:29.150012 kubelet[2468]: E0130 15:50:29.149972 2468 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" not found" Jan 30 15:50:29.184554 kubelet[2468]: I0130 15:50:29.183981 2468 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.184554 kubelet[2468]: E0130 15:50:29.184445 2468 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://172.24.4.96:6443/api/v1/nodes\": dial tcp 172.24.4.96:6443: connect: connection refused" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.199680 kubelet[2468]: I0130 15:50:29.199626 2468 topology_manager.go:215] "Topology Admit Handler" podUID="2a2ba3fab3fb2754b493591f237716fe" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.201516 kubelet[2468]: I0130 15:50:29.201477 2468 topology_manager.go:215] "Topology Admit Handler" podUID="5ff52e75ad7260ce851b3bc3686e588e" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.203338 kubelet[2468]: I0130 15:50:29.203209 2468 topology_manager.go:215] "Topology Admit Handler" podUID="e1f6672c24080b67ffdc13398623a178" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.296918 kubelet[2468]: E0130 15:50:29.296704 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c-6e27ecb2ae.novalocal?timeout=10s\": dial tcp 172.24.4.96:6443: connect: connection refused" interval="400ms" Jan 30 15:50:29.300127 kubelet[2468]: I0130 15:50:29.299232 2468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a2ba3fab3fb2754b493591f237716fe-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"2a2ba3fab3fb2754b493591f237716fe\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.300127 kubelet[2468]: I0130 15:50:29.299300 2468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ff52e75ad7260ce851b3bc3686e588e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"5ff52e75ad7260ce851b3bc3686e588e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.300127 kubelet[2468]: I0130 15:50:29.299346 2468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ff52e75ad7260ce851b3bc3686e588e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"5ff52e75ad7260ce851b3bc3686e588e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.300127 kubelet[2468]: I0130 15:50:29.299387 2468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ff52e75ad7260ce851b3bc3686e588e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"5ff52e75ad7260ce851b3bc3686e588e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.300876 kubelet[2468]: I0130 15:50:29.299433 2468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ff52e75ad7260ce851b3bc3686e588e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"5ff52e75ad7260ce851b3bc3686e588e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.300876 kubelet[2468]: I0130 
15:50:29.299474 2468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a2ba3fab3fb2754b493591f237716fe-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"2a2ba3fab3fb2754b493591f237716fe\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.300876 kubelet[2468]: I0130 15:50:29.299547 2468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a2ba3fab3fb2754b493591f237716fe-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"2a2ba3fab3fb2754b493591f237716fe\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.300876 kubelet[2468]: I0130 15:50:29.299589 2468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ff52e75ad7260ce851b3bc3686e588e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"5ff52e75ad7260ce851b3bc3686e588e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.301136 kubelet[2468]: I0130 15:50:29.299633 2468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1f6672c24080b67ffdc13398623a178-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"e1f6672c24080b67ffdc13398623a178\") " pod="kube-system/kube-scheduler-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.388543 kubelet[2468]: I0130 15:50:29.388168 2468 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.388988 kubelet[2468]: E0130 15:50:29.388935 2468 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.96:6443/api/v1/nodes\": dial tcp 172.24.4.96:6443: connect: connection refused" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.512146 containerd[1577]: time="2025-01-30T15:50:29.511588048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal,Uid:2a2ba3fab3fb2754b493591f237716fe,Namespace:kube-system,Attempt:0,}" Jan 30 15:50:29.512999 containerd[1577]: time="2025-01-30T15:50:29.512655108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-c-6e27ecb2ae.novalocal,Uid:e1f6672c24080b67ffdc13398623a178,Namespace:kube-system,Attempt:0,}" Jan 30 15:50:29.513071 containerd[1577]: time="2025-01-30T15:50:29.512997860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal,Uid:5ff52e75ad7260ce851b3bc3686e588e,Namespace:kube-system,Attempt:0,}" Jan 30 15:50:29.698597 kubelet[2468]: E0130 15:50:29.698443 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c-6e27ecb2ae.novalocal?timeout=10s\": dial tcp 172.24.4.96:6443: connect: connection refused" interval="800ms" Jan 30 15:50:29.792704 kubelet[2468]: I0130 15:50:29.792638 2468 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:29.793136 kubelet[2468]: E0130 15:50:29.793067 2468 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.96:6443/api/v1/nodes\": dial tcp 172.24.4.96:6443: connect: connection refused" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:30.065249 kubelet[2468]: W0130 15:50:30.065124 2468 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:30.065249 kubelet[2468]: E0130 15:50:30.065206 2468 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:30.124081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505898861.mount: Deactivated successfully. Jan 30 15:50:30.149120 containerd[1577]: time="2025-01-30T15:50:30.148689137Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 15:50:30.153770 containerd[1577]: time="2025-01-30T15:50:30.153606813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 15:50:30.153917 containerd[1577]: time="2025-01-30T15:50:30.153771291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 15:50:30.154809 containerd[1577]: time="2025-01-30T15:50:30.154649095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:50:30.157788 containerd[1577]: time="2025-01-30T15:50:30.157659797Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:50:30.164588 containerd[1577]: time="2025-01-30T15:50:30.163998113Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 652.254915ms" Jan 30 15:50:30.168007 containerd[1577]: time="2025-01-30T15:50:30.167929480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:50:30.170288 containerd[1577]: time="2025-01-30T15:50:30.170207899Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 657.12022ms" Jan 30 15:50:30.171482 containerd[1577]: time="2025-01-30T15:50:30.171376699Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:50:30.173539 containerd[1577]: time="2025-01-30T15:50:30.173320422Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 15:50:30.189545 containerd[1577]: time="2025-01-30T15:50:30.186002053Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 673.205571ms" Jan 30 15:50:30.258234 kubelet[2468]: W0130 15:50:30.258118 2468 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c-6e27ecb2ae.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:30.258234 kubelet[2468]: E0130 15:50:30.258179 2468 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-c-6e27ecb2ae.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:30.372330 containerd[1577]: time="2025-01-30T15:50:30.371445159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:50:30.373369 containerd[1577]: time="2025-01-30T15:50:30.373334459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:50:30.373586 containerd[1577]: time="2025-01-30T15:50:30.373548260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:50:30.374041 containerd[1577]: time="2025-01-30T15:50:30.374011608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:50:30.374200 containerd[1577]: time="2025-01-30T15:50:30.374060430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:50:30.375367 containerd[1577]: time="2025-01-30T15:50:30.374902257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:50:30.375367 containerd[1577]: time="2025-01-30T15:50:30.375078668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:50:30.375670 containerd[1577]: time="2025-01-30T15:50:30.375578805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:50:30.383098 containerd[1577]: time="2025-01-30T15:50:30.382946059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:50:30.383098 containerd[1577]: time="2025-01-30T15:50:30.383025728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:50:30.383098 containerd[1577]: time="2025-01-30T15:50:30.383064531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:50:30.383389 containerd[1577]: time="2025-01-30T15:50:30.383210635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:50:30.479060 containerd[1577]: time="2025-01-30T15:50:30.478838579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-c-6e27ecb2ae.novalocal,Uid:e1f6672c24080b67ffdc13398623a178,Namespace:kube-system,Attempt:0,} returns sandbox id \"e88ee5b4e0031564533225795d5169567d43639fb03e2db268615841f9e4eade\"" Jan 30 15:50:30.479060 containerd[1577]: time="2025-01-30T15:50:30.478950418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal,Uid:5ff52e75ad7260ce851b3bc3686e588e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ff3d80ed413f70ece961bddb448601d316b2b1800249edd69033e1dded3c06b\"" Jan 30 15:50:30.484128 containerd[1577]: time="2025-01-30T15:50:30.483095846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal,Uid:2a2ba3fab3fb2754b493591f237716fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"9353f6b89ad1302e05461d2011340f866d87ec72bcf1a4b72159713e47e6c36b\"" Jan 30 15:50:30.484247 containerd[1577]: time="2025-01-30T15:50:30.484216316Z" level=info msg="CreateContainer within sandbox \"3ff3d80ed413f70ece961bddb448601d316b2b1800249edd69033e1dded3c06b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 15:50:30.485895 containerd[1577]: time="2025-01-30T15:50:30.485846099Z" level=info msg="CreateContainer within sandbox \"e88ee5b4e0031564533225795d5169567d43639fb03e2db268615841f9e4eade\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 15:50:30.487157 containerd[1577]: time="2025-01-30T15:50:30.487125858Z" level=info msg="CreateContainer within sandbox \"9353f6b89ad1302e05461d2011340f866d87ec72bcf1a4b72159713e47e6c36b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 15:50:30.499856 kubelet[2468]: E0130 15:50:30.499780 2468 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-c-6e27ecb2ae.novalocal?timeout=10s\": dial tcp 172.24.4.96:6443: connect: connection refused" interval="1.6s" Jan 30 15:50:30.505429 kubelet[2468]: W0130 15:50:30.505364 2468 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:30.505526 kubelet[2468]: E0130 15:50:30.505515 2468 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:30.525678 containerd[1577]: time="2025-01-30T15:50:30.525644843Z" level=info msg="CreateContainer within sandbox \"e88ee5b4e0031564533225795d5169567d43639fb03e2db268615841f9e4eade\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} 
returns container id \"da82181ee5ef47302ab340f48746d3ac30d3d94e4ef1c5964cce4fe939edceca\"" Jan 30 15:50:30.526748 containerd[1577]: time="2025-01-30T15:50:30.526709458Z" level=info msg="StartContainer for \"da82181ee5ef47302ab340f48746d3ac30d3d94e4ef1c5964cce4fe939edceca\"" Jan 30 15:50:30.533866 containerd[1577]: time="2025-01-30T15:50:30.533118397Z" level=info msg="CreateContainer within sandbox \"9353f6b89ad1302e05461d2011340f866d87ec72bcf1a4b72159713e47e6c36b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e80836a9be41acf2e88f14cfb10bf750813c884001d3ed727216feda307bf321\"" Jan 30 15:50:30.533866 containerd[1577]: time="2025-01-30T15:50:30.533726737Z" level=info msg="StartContainer for \"e80836a9be41acf2e88f14cfb10bf750813c884001d3ed727216feda307bf321\"" Jan 30 15:50:30.535278 containerd[1577]: time="2025-01-30T15:50:30.535244821Z" level=info msg="CreateContainer within sandbox \"3ff3d80ed413f70ece961bddb448601d316b2b1800249edd69033e1dded3c06b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ed3c3b1d4bdded0a81cf80926f36a3247e3d7ca1332771e59a9632c7a69818f4\"" Jan 30 15:50:30.535806 containerd[1577]: time="2025-01-30T15:50:30.535765516Z" level=info msg="StartContainer for \"ed3c3b1d4bdded0a81cf80926f36a3247e3d7ca1332771e59a9632c7a69818f4\"" Jan 30 15:50:30.579155 kubelet[2468]: W0130 15:50:30.579008 2468 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:30.579456 kubelet[2468]: E0130 15:50:30.579442 2468 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.96:6443: connect: connection refused Jan 30 15:50:30.597197 kubelet[2468]: I0130 15:50:30.597170 2468 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:30.597819 kubelet[2468]: E0130 15:50:30.597784 2468 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.96:6443/api/v1/nodes\": dial tcp 172.24.4.96:6443: connect: connection refused" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:30.658325 containerd[1577]: time="2025-01-30T15:50:30.658107115Z" level=info msg="StartContainer for \"da82181ee5ef47302ab340f48746d3ac30d3d94e4ef1c5964cce4fe939edceca\" returns successfully" Jan 30 15:50:30.658325 containerd[1577]: time="2025-01-30T15:50:30.658210800Z" level=info msg="StartContainer for \"e80836a9be41acf2e88f14cfb10bf750813c884001d3ed727216feda307bf321\" returns successfully" Jan 30 15:50:30.676124 containerd[1577]: time="2025-01-30T15:50:30.676078289Z" level=info msg="StartContainer for \"ed3c3b1d4bdded0a81cf80926f36a3247e3d7ca1332771e59a9632c7a69818f4\" returns successfully" Jan 30 15:50:32.201909 kubelet[2468]: I0130 15:50:32.201878 2468 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:32.407642 kubelet[2468]: E0130 15:50:32.407578 2468 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" not found" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:32.513682 kubelet[2468]: I0130 15:50:32.513550 2468 kubelet_node_status.go:76] "Successfully registered node" 
node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:32.543105 kubelet[2468]: E0130 15:50:32.543061 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" not found" Jan 30 15:50:32.643690 kubelet[2468]: E0130 15:50:32.643653 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" not found" Jan 30 15:50:32.744651 kubelet[2468]: E0130 15:50:32.744576 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" not found" Jan 30 15:50:32.845853 kubelet[2468]: E0130 15:50:32.845764 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" not found" Jan 30 15:50:32.946767 kubelet[2468]: E0130 15:50:32.946680 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" not found" Jan 30 15:50:33.046894 kubelet[2468]: E0130 15:50:33.046828 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" not found" Jan 30 15:50:33.147693 kubelet[2468]: E0130 15:50:33.147459 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" not found" Jan 30 15:50:33.248955 kubelet[2468]: E0130 15:50:33.248789 2468 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" not found" Jan 30 15:50:34.069023 kubelet[2468]: I0130 15:50:34.068589 2468 apiserver.go:52] "Watching apiserver" Jan 30 15:50:34.082628 kubelet[2468]: I0130 15:50:34.082585 2468 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 15:50:34.807109 systemd[1]: Reloading requested from client PID 2742 ('systemctl') (unit session-11.scope)... Jan 30 15:50:34.807139 systemd[1]: Reloading... Jan 30 15:50:34.918571 zram_generator::config[2781]: No configuration found. Jan 30 15:50:35.061429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 15:50:35.147679 systemd[1]: Reloading finished in 339 ms. Jan 30 15:50:35.177148 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:50:35.177387 kubelet[2468]: I0130 15:50:35.177351 2468 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:50:35.188596 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 15:50:35.188916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:50:35.194291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 15:50:35.477862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 15:50:35.499149 (kubelet)[2855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 15:50:35.587014 kubelet[2855]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 15:50:35.587014 kubelet[2855]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 15:50:35.587014 kubelet[2855]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:50:35.587014 kubelet[2855]: I0130 15:50:35.586958 2855 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 15:50:35.597404 kubelet[2855]: I0130 15:50:35.593060 2855 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 15:50:35.597404 kubelet[2855]: I0130 15:50:35.593108 2855 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:50:35.597404 kubelet[2855]: I0130 15:50:35.593557 2855 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 15:50:35.601178 kubelet[2855]: I0130 15:50:35.600941 2855 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 15:50:35.602544 kubelet[2855]: I0130 15:50:35.602509 2855 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 15:50:35.610171 kubelet[2855]: I0130 15:50:35.610121 2855 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 15:50:35.610581 kubelet[2855]: I0130 15:50:35.610553 2855 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 15:50:35.610749 kubelet[2855]: I0130 15:50:35.610582 2855 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-c-6e27ecb2ae.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 15:50:35.610840 kubelet[2855]: I0130 15:50:35.610763 2855 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 30 15:50:35.610840 kubelet[2855]: I0130 15:50:35.610775 2855 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 15:50:35.610840 kubelet[2855]: I0130 15:50:35.610814 2855 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:50:35.611361 kubelet[2855]: I0130 15:50:35.610942 2855 kubelet.go:400] "Attempting to sync node with API server" Jan 30 15:50:35.611361 kubelet[2855]: I0130 15:50:35.610961 2855 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 15:50:35.611361 kubelet[2855]: I0130 15:50:35.611296 2855 kubelet.go:312] "Adding apiserver pod source" Jan 30 15:50:35.611361 kubelet[2855]: I0130 15:50:35.611314 2855 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 15:50:35.614527 kubelet[2855]: I0130 15:50:35.613240 2855 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 15:50:35.614527 kubelet[2855]: I0130 15:50:35.613420 2855 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 15:50:35.614527 kubelet[2855]: I0130 15:50:35.614282 2855 server.go:1264] "Started kubelet" Jan 30 15:50:35.618108 kubelet[2855]: I0130 15:50:35.618089 2855 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 15:50:35.626598 kubelet[2855]: I0130 15:50:35.626563 2855 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 15:50:35.627764 kubelet[2855]: I0130 15:50:35.627422 2855 server.go:455] "Adding debug handlers to kubelet server" Jan 30 15:50:35.635254 kubelet[2855]: I0130 15:50:35.635067 2855 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 15:50:35.635327 kubelet[2855]: I0130 15:50:35.635272 2855 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 15:50:35.636918 kubelet[2855]: I0130 15:50:35.636897 2855 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 15:50:35.640610 kubelet[2855]: I0130 15:50:35.640582 2855 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 15:50:35.640729 kubelet[2855]: I0130 15:50:35.640704 2855 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:50:35.644335 kubelet[2855]: I0130 15:50:35.644114 2855 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:50:35.646588 kubelet[2855]: I0130 15:50:35.646561 2855 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 15:50:35.646645 kubelet[2855]: I0130 15:50:35.646592 2855 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 15:50:35.646645 kubelet[2855]: I0130 15:50:35.646607 2855 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 15:50:35.646695 kubelet[2855]: E0130 15:50:35.646649 2855 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 15:50:35.657418 kubelet[2855]: I0130 15:50:35.656287 2855 factory.go:221] Registration of the systemd container factory successfully Jan 30 15:50:35.657418 kubelet[2855]: I0130 15:50:35.656383 2855 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 15:50:35.658236 kubelet[2855]: E0130 15:50:35.658161 2855 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 15:50:35.659222 kubelet[2855]: I0130 15:50:35.659194 2855 factory.go:221] Registration of the containerd container factory successfully Jan 30 15:50:35.715097 kubelet[2855]: I0130 15:50:35.714982 2855 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 15:50:35.715097 kubelet[2855]: I0130 15:50:35.715000 2855 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 15:50:35.715097 kubelet[2855]: I0130 15:50:35.715016 2855 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:50:35.715449 kubelet[2855]: I0130 15:50:35.715362 2855 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 15:50:35.715449 kubelet[2855]: I0130 15:50:35.715376 2855 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 15:50:35.715449 kubelet[2855]: I0130 15:50:35.715395 2855 policy_none.go:49] "None policy: Start" Jan 30 15:50:35.716536 kubelet[2855]: I0130 15:50:35.716026 2855 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 15:50:35.716536 kubelet[2855]: I0130 15:50:35.716044 2855 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:50:35.716536 kubelet[2855]: I0130 15:50:35.716200 2855 state_mem.go:75] "Updated machine memory state" Jan 30 15:50:35.718229 kubelet[2855]: I0130 15:50:35.717222 2855 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:50:35.718229 kubelet[2855]: I0130 15:50:35.717370 2855 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:50:35.718229 kubelet[2855]: I0130 15:50:35.717449 2855 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:50:35.740646 kubelet[2855]: I0130 15:50:35.739789 2855 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.748337 kubelet[2855]: I0130 15:50:35.747314 2855 topology_manager.go:215] "Topology Admit Handler" podUID="e1f6672c24080b67ffdc13398623a178" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.748337 kubelet[2855]: I0130 15:50:35.747427 2855 topology_manager.go:215] "Topology Admit Handler" podUID="2a2ba3fab3fb2754b493591f237716fe" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.748337 kubelet[2855]: I0130 15:50:35.747566 2855 topology_manager.go:215] 
"Topology Admit Handler" podUID="5ff52e75ad7260ce851b3bc3686e588e" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.771671 kubelet[2855]: I0130 15:50:35.771164 2855 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.771671 kubelet[2855]: I0130 15:50:35.771231 2855 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.785333 sudo[2885]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 15:50:35.785670 sudo[2885]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 15:50:35.786316 kubelet[2855]: W0130 15:50:35.786123 2855 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:50:35.787715 kubelet[2855]: W0130 15:50:35.787651 2855 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:50:35.787715 kubelet[2855]: W0130 15:50:35.787670 2855 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:50:35.842517 kubelet[2855]: I0130 15:50:35.842469 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ff52e75ad7260ce851b3bc3686e588e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"5ff52e75ad7260ce851b3bc3686e588e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.842517 kubelet[2855]: I0130 15:50:35.842521 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a2ba3fab3fb2754b493591f237716fe-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"2a2ba3fab3fb2754b493591f237716fe\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.842673 kubelet[2855]: I0130 15:50:35.842547 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a2ba3fab3fb2754b493591f237716fe-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"2a2ba3fab3fb2754b493591f237716fe\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.842673 kubelet[2855]: I0130 15:50:35.842567 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ff52e75ad7260ce851b3bc3686e588e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"5ff52e75ad7260ce851b3bc3686e588e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.842673 kubelet[2855]: I0130 15:50:35.842587 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ff52e75ad7260ce851b3bc3686e588e-kubeconfig\") pod 
\"kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"5ff52e75ad7260ce851b3bc3686e588e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.842673 kubelet[2855]: I0130 15:50:35.842604 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1f6672c24080b67ffdc13398623a178-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"e1f6672c24080b67ffdc13398623a178\") " pod="kube-system/kube-scheduler-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.842673 kubelet[2855]: I0130 15:50:35.842621 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a2ba3fab3fb2754b493591f237716fe-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"2a2ba3fab3fb2754b493591f237716fe\") " pod="kube-system/kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.842805 kubelet[2855]: I0130 15:50:35.842640 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ff52e75ad7260ce851b3bc3686e588e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"5ff52e75ad7260ce851b3bc3686e588e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:35.842805 kubelet[2855]: I0130 15:50:35.842658 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ff52e75ad7260ce851b3bc3686e588e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal\" (UID: \"5ff52e75ad7260ce851b3bc3686e588e\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:36.327083 sudo[2885]: pam_unix(sudo:session): session closed for user root Jan 30 15:50:36.612820 kubelet[2855]: I0130 15:50:36.612720 2855 apiserver.go:52] "Watching apiserver" Jan 30 15:50:36.641127 kubelet[2855]: I0130 15:50:36.641088 2855 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 15:50:36.702762 kubelet[2855]: W0130 15:50:36.702717 2855 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:50:36.702889 kubelet[2855]: E0130 15:50:36.702862 2855 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:36.706505 kubelet[2855]: W0130 15:50:36.704868 2855 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 15:50:36.706505 kubelet[2855]: E0130 15:50:36.705270 2855 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-0-c-6e27ecb2ae.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-c-6e27ecb2ae.novalocal" Jan 30 15:50:36.807644 kubelet[2855]: I0130 15:50:36.807582 2855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-c-6e27ecb2ae.novalocal" podStartSLOduration=1.807564272 
podStartE2EDuration="1.807564272s" podCreationTimestamp="2025-01-30 15:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:50:36.805380669 +0000 UTC m=+1.297729663" watchObservedRunningTime="2025-01-30 15:50:36.807564272 +0000 UTC m=+1.299913256" Jan 30 15:50:36.807832 kubelet[2855]: I0130 15:50:36.807686 2855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-c-6e27ecb2ae.novalocal" podStartSLOduration=1.8076804 podStartE2EDuration="1.8076804s" podCreationTimestamp="2025-01-30 15:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:50:36.787034202 +0000 UTC m=+1.279383176" watchObservedRunningTime="2025-01-30 15:50:36.8076804 +0000 UTC m=+1.300029374" Jan 30 15:50:38.730750 sudo[1880]: pam_unix(sudo:session): session closed for user root Jan 30 15:50:39.009109 sshd[1874]: pam_unix(sshd:session): session closed for user core Jan 30 15:50:39.015937 systemd[1]: sshd@8-172.24.4.96:22-172.24.4.1:58858.service: Deactivated successfully. Jan 30 15:50:39.023849 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 15:50:39.026491 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit. Jan 30 15:50:39.030192 systemd-logind[1561]: Removed session 11. Jan 30 15:50:41.194402 kubelet[2855]: I0130 15:50:41.194175 2855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-c-6e27ecb2ae.novalocal" podStartSLOduration=6.194102452 podStartE2EDuration="6.194102452s" podCreationTimestamp="2025-01-30 15:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:50:36.829890408 +0000 UTC m=+1.322239402" watchObservedRunningTime="2025-01-30 15:50:41.194102452 +0000 UTC m=+5.686451466" Jan 30 15:50:50.115030 kubelet[2855]: I0130 15:50:50.114859 2855 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 15:50:50.117735 kubelet[2855]: I0130 15:50:50.116940 2855 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 15:50:50.117978 containerd[1577]: time="2025-01-30T15:50:50.115812963Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 30 15:50:50.808326 kubelet[2855]: I0130 15:50:50.804913 2855 topology_manager.go:215] "Topology Admit Handler" podUID="62da5769-a556-4d65-ac40-a32e671ed2e5" podNamespace="kube-system" podName="cilium-kwp9p" Jan 30 15:50:50.808326 kubelet[2855]: I0130 15:50:50.805185 2855 topology_manager.go:215] "Topology Admit Handler" podUID="7b35cadb-f181-41d5-ad45-03e3f3fbb0ed" podNamespace="kube-system" podName="kube-proxy-ft5jg" Jan 30 15:50:50.835259 kubelet[2855]: I0130 15:50:50.835150 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-cilium-run\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:50.835597 kubelet[2855]: I0130 15:50:50.835553 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-cilium-cgroup\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:50.835653 kubelet[2855]: I0130 15:50:50.835615 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b35cadb-f181-41d5-ad45-03e3f3fbb0ed-kube-proxy\") pod \"kube-proxy-ft5jg\" (UID: \"7b35cadb-f181-41d5-ad45-03e3f3fbb0ed\") " pod="kube-system/kube-proxy-ft5jg" Jan 30 15:50:50.835653 kubelet[2855]: I0130 15:50:50.835640 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b35cadb-f181-41d5-ad45-03e3f3fbb0ed-lib-modules\") pod \"kube-proxy-ft5jg\" (UID: \"7b35cadb-f181-41d5-ad45-03e3f3fbb0ed\") " pod="kube-system/kube-proxy-ft5jg" Jan 30 15:50:50.835714 kubelet[2855]: I0130 15:50:50.835659 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-cni-path\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:50.835714 kubelet[2855]: I0130 15:50:50.835694 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62da5769-a556-4d65-ac40-a32e671ed2e5-clustermesh-secrets\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:50.835764 kubelet[2855]: I0130 15:50:50.835717 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-host-proc-sys-kernel\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:50.835764 kubelet[2855]: I0130 15:50:50.835735 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-xtables-lock\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:50.836049 kubelet[2855]: I0130 15:50:50.835934 2855 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62da5769-a556-4d65-ac40-a32e671ed2e5-cilium-config-path\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:50.836553 kubelet[2855]: I0130 15:50:50.836434 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62da5769-a556-4d65-ac40-a32e671ed2e5-hubble-tls\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:50.836553 kubelet[2855]: I0130 15:50:50.836478 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-host-proc-sys-net\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:50.836553 kubelet[2855]: I0130 15:50:50.836523 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw9lj\" (UniqueName: \"kubernetes.io/projected/62da5769-a556-4d65-ac40-a32e671ed2e5-kube-api-access-dw9lj\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:50.836553 kubelet[2855]: I0130 15:50:50.836545 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b35cadb-f181-41d5-ad45-03e3f3fbb0ed-xtables-lock\") pod \"kube-proxy-ft5jg\" (UID: \"7b35cadb-f181-41d5-ad45-03e3f3fbb0ed\") " pod="kube-system/kube-proxy-ft5jg" Jan 30 15:50:50.836969 kubelet[2855]: I0130 15:50:50.836563 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp9rq\" (UniqueName: \"kubernetes.io/projected/7b35cadb-f181-41d5-ad45-03e3f3fbb0ed-kube-api-access-pp9rq\") pod \"kube-proxy-ft5jg\" (UID: \"7b35cadb-f181-41d5-ad45-03e3f3fbb0ed\") " pod="kube-system/kube-proxy-ft5jg" Jan 30 15:50:50.836969 kubelet[2855]: I0130 15:50:50.836603 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-hostproc\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:50.836969 kubelet[2855]: I0130 15:50:50.836624 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-etc-cni-netd\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:50.837080 kubelet[2855]: I0130 15:50:50.836971 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-lib-modules\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:50.837080 kubelet[2855]: I0130 15:50:50.837014 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-bpf-maps\") pod \"cilium-kwp9p\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " pod="kube-system/cilium-kwp9p" Jan 30 15:50:51.058016 kubelet[2855]: I0130 15:50:51.055148 2855 topology_manager.go:215] "Topology Admit Handler" podUID="395991b0-1e39-45bf-9a19-60ae6325572d" podNamespace="kube-system" podName="cilium-operator-599987898-5tztk" Jan 30 15:50:51.115620 containerd[1577]: time="2025-01-30T15:50:51.115516362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ft5jg,Uid:7b35cadb-f181-41d5-ad45-03e3f3fbb0ed,Namespace:kube-system,Attempt:0,}" Jan 30 15:50:51.120787 containerd[1577]: time="2025-01-30T15:50:51.120740968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwp9p,Uid:62da5769-a556-4d65-ac40-a32e671ed2e5,Namespace:kube-system,Attempt:0,}" Jan 30 15:50:51.139989 kubelet[2855]: I0130 15:50:51.139929 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/395991b0-1e39-45bf-9a19-60ae6325572d-cilium-config-path\") pod \"cilium-operator-599987898-5tztk\" (UID: \"395991b0-1e39-45bf-9a19-60ae6325572d\") " pod="kube-system/cilium-operator-599987898-5tztk" Jan 30 15:50:51.140310 kubelet[2855]: I0130 15:50:51.139995 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhsm7\" (UniqueName: \"kubernetes.io/projected/395991b0-1e39-45bf-9a19-60ae6325572d-kube-api-access-fhsm7\") pod \"cilium-operator-599987898-5tztk\" (UID: \"395991b0-1e39-45bf-9a19-60ae6325572d\") " pod="kube-system/cilium-operator-599987898-5tztk" Jan 30 15:50:51.331055 containerd[1577]: time="2025-01-30T15:50:51.330922429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:50:51.331055 containerd[1577]: time="2025-01-30T15:50:51.331031894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:50:51.331055 containerd[1577]: time="2025-01-30T15:50:51.331063985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:50:51.331723 containerd[1577]: time="2025-01-30T15:50:51.331222322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:50:51.370841 containerd[1577]: time="2025-01-30T15:50:51.370740149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-5tztk,Uid:395991b0-1e39-45bf-9a19-60ae6325572d,Namespace:kube-system,Attempt:0,}" Jan 30 15:50:51.374875 containerd[1577]: time="2025-01-30T15:50:51.374769745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ft5jg,Uid:7b35cadb-f181-41d5-ad45-03e3f3fbb0ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"d760cd22b787db1cef82b24391ec0f4f92d5c0108527272a98f478336b524389\"" Jan 30 15:50:51.379062 containerd[1577]: time="2025-01-30T15:50:51.379033520Z" level=info msg="CreateContainer within sandbox \"d760cd22b787db1cef82b24391ec0f4f92d5c0108527272a98f478336b524389\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 15:50:51.432758 containerd[1577]: time="2025-01-30T15:50:51.430659822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:50:51.432758 containerd[1577]: time="2025-01-30T15:50:51.431529172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:50:51.432758 containerd[1577]: time="2025-01-30T15:50:51.431550352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:50:51.432758 containerd[1577]: time="2025-01-30T15:50:51.431641833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:50:51.453049 containerd[1577]: time="2025-01-30T15:50:51.452989731Z" level=info msg="CreateContainer within sandbox \"d760cd22b787db1cef82b24391ec0f4f92d5c0108527272a98f478336b524389\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b6d44b4a77eeb671223003fd38f068fb79807721fd8c083d477176f056420816\"" Jan 30 15:50:51.454765 containerd[1577]: time="2025-01-30T15:50:51.454693316Z" level=info msg="StartContainer for \"b6d44b4a77eeb671223003fd38f068fb79807721fd8c083d477176f056420816\"" Jan 30 15:50:51.473120 containerd[1577]: time="2025-01-30T15:50:51.472904974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:50:51.473120 containerd[1577]: time="2025-01-30T15:50:51.472966118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:50:51.473120 containerd[1577]: time="2025-01-30T15:50:51.472980425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:50:51.473120 containerd[1577]: time="2025-01-30T15:50:51.473061738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:50:51.489929 containerd[1577]: time="2025-01-30T15:50:51.489767932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwp9p,Uid:62da5769-a556-4d65-ac40-a32e671ed2e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\"" Jan 30 15:50:51.493488 containerd[1577]: time="2025-01-30T15:50:51.493447091Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 15:50:51.559022 containerd[1577]: time="2025-01-30T15:50:51.558886874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-5tztk,Uid:395991b0-1e39-45bf-9a19-60ae6325572d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\"" Jan 30 15:50:51.559451 containerd[1577]: time="2025-01-30T15:50:51.559425293Z" level=info msg="StartContainer for \"b6d44b4a77eeb671223003fd38f068fb79807721fd8c083d477176f056420816\" returns successfully" Jan 30 15:50:51.752616 kubelet[2855]: I0130 15:50:51.752292 2855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ft5jg" podStartSLOduration=1.752255591 podStartE2EDuration="1.752255591s" podCreationTimestamp="2025-01-30 15:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:50:51.752032462 +0000 UTC m=+16.244381426" watchObservedRunningTime="2025-01-30 15:50:51.752255591 +0000 UTC m=+16.244604565" Jan 30 15:50:57.959471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1720765734.mount: Deactivated successfully. 
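The reconciler_common entries above enumerate every volume the kubelet must verify as attached before cilium-kwp9p and kube-proxy-ft5jg can start: host-path mounts such as cilium-run and bpf-maps, the kube-proxy ConfigMap, the clustermesh-secrets Secret, and projected service-account tokens. A small standard-library Go sketch, offered only as a reading aid for such journal lines, pulls the volume name and owning pod out of each "VerifyControllerAttachedVolume started" entry; the regular expression is an assumption about the escaped quoting seen above.

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    // Matches: ... started for volume \"NAME\" ... pod="NAMESPACE/POD"
    var volRe = regexp.MustCompile(`started for volume \\"([^"\\]+)\\".*?pod="([^"]+)"`)

    func main() {
    	perPod := map[string][]string{} // pod -> volume names
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these journal lines can be long
    	for sc.Scan() {
    		for _, m := range volRe.FindAllStringSubmatch(sc.Text(), -1) {
    			perPod[m[2]] = append(perPod[m[2]], m[1])
    		}
    	}
    	for pod, vols := range perPod {
    		fmt.Printf("%s: %d volumes %v\n", pod, len(vols), vols)
    	}
    }

Fed the reconciler lines above on stdin, this prints the per-pod volume lists (for example, kube-system/kube-proxy-ft5jg with kube-proxy, lib-modules, xtables-lock and its API-access token).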
Jan 30 15:51:01.263588 containerd[1577]: time="2025-01-30T15:51:01.261627861Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:51:01.265704 containerd[1577]: time="2025-01-30T15:51:01.265568240Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 15:51:01.268979 containerd[1577]: time="2025-01-30T15:51:01.268887835Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:51:01.280015 containerd[1577]: time="2025-01-30T15:51:01.279928780Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.786418851s" Jan 30 15:51:01.280151 containerd[1577]: time="2025-01-30T15:51:01.280012107Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 15:51:01.282778 containerd[1577]: time="2025-01-30T15:51:01.282732718Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 15:51:01.292766 containerd[1577]: time="2025-01-30T15:51:01.292697936Z" level=info msg="CreateContainer within sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 15:51:01.342419 containerd[1577]: time="2025-01-30T15:51:01.342059532Z" level=info msg="CreateContainer within sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85\"" Jan 30 15:51:01.343832 containerd[1577]: time="2025-01-30T15:51:01.343772785Z" level=info msg="StartContainer for \"6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85\"" Jan 30 15:51:01.442386 containerd[1577]: time="2025-01-30T15:51:01.442050832Z" level=info msg="StartContainer for \"6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85\" returns successfully" Jan 30 15:51:02.314941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85-rootfs.mount: Deactivated successfully. 
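The cilium image pull above reports 166730503 bytes read over an elapsed 9.786418851s, i.e. roughly 16 MiB/s effective throughput. A trivial Go sketch of that arithmetic, with the two values copied from the entries above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const bytesRead = 166730503 // "bytes read=166730503"
    	elapsed, err := time.ParseDuration("9.786418851s") // "in 9.786418851s"
    	if err != nil {
    		panic(err)
    	}
    	bps := float64(bytesRead) / elapsed.Seconds()
    	fmt.Printf("%.1f MiB/s\n", bps/(1024*1024)) // ≈ 16.2 MiB/s
    }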
Jan 30 15:51:02.794489 containerd[1577]: time="2025-01-30T15:51:02.793629251Z" level=info msg="shim disconnected" id=6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85 namespace=k8s.io Jan 30 15:51:02.794489 containerd[1577]: time="2025-01-30T15:51:02.793683292Z" level=warning msg="cleaning up after shim disconnected" id=6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85 namespace=k8s.io Jan 30 15:51:02.794489 containerd[1577]: time="2025-01-30T15:51:02.793698731Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:51:03.775672 containerd[1577]: time="2025-01-30T15:51:03.775607746Z" level=info msg="CreateContainer within sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 15:51:03.812808 containerd[1577]: time="2025-01-30T15:51:03.812720724Z" level=info msg="CreateContainer within sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166\"" Jan 30 15:51:03.813638 containerd[1577]: time="2025-01-30T15:51:03.813263782Z" level=info msg="StartContainer for \"ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166\"" Jan 30 15:51:03.857932 systemd[1]: run-containerd-runc-k8s.io-ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166-runc.lIvJmn.mount: Deactivated successfully. Jan 30 15:51:03.890292 containerd[1577]: time="2025-01-30T15:51:03.890259741Z" level=info msg="StartContainer for \"ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166\" returns successfully" Jan 30 15:51:03.893612 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 15:51:03.894622 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:51:03.894685 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:51:03.903753 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 15:51:03.920616 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 15:51:03.941295 containerd[1577]: time="2025-01-30T15:51:03.941237079Z" level=info msg="shim disconnected" id=ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166 namespace=k8s.io Jan 30 15:51:03.941295 containerd[1577]: time="2025-01-30T15:51:03.941287394Z" level=warning msg="cleaning up after shim disconnected" id=ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166 namespace=k8s.io Jan 30 15:51:03.941295 containerd[1577]: time="2025-01-30T15:51:03.941298144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:51:04.785885 containerd[1577]: time="2025-01-30T15:51:04.785617270Z" level=info msg="CreateContainer within sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 15:51:04.798465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166-rootfs.mount: Deactivated successfully. 
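The repeated CreateContainer / StartContainer / "shim disconnected" cycles above, and in the entries that follow, correspond to the cilium pod's init containers running one at a time inside the same sandbox: each init container must exit before the next is created, so every completed step leaves a "shim disconnected" cleanup message behind. A short sketch listing the order in which this journal shows them; the list is read off the surrounding entries, not taken from any manifest.

    package main

    import "fmt"

    func main() {
    	// Containers created in sandbox 3048f00e66b4... for pod cilium-kwp9p,
    	// in the order their CreateContainer entries appear in this journal.
    	steps := []string{
    		"mount-cgroup",            // init: exits, leaving the first "shim disconnected"
    		"apply-sysctl-overwrites", // init: coincides with the systemd-sysctl restart above
    		"mount-bpf-fs",            // init
    		"clean-cilium-state",      // init (appears later in this log)
    		"cilium-agent",            // main container (appears later), keeps running
    	}
    	for i, name := range steps {
    		fmt.Printf("%d. %s\n", i+1, name)
    	}
    }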
Jan 30 15:51:04.829130 containerd[1577]: time="2025-01-30T15:51:04.829000477Z" level=info msg="CreateContainer within sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4\"" Jan 30 15:51:04.831235 containerd[1577]: time="2025-01-30T15:51:04.829903050Z" level=info msg="StartContainer for \"cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4\"" Jan 30 15:51:04.917719 containerd[1577]: time="2025-01-30T15:51:04.917686100Z" level=info msg="StartContainer for \"cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4\" returns successfully" Jan 30 15:51:04.963411 containerd[1577]: time="2025-01-30T15:51:04.963353189Z" level=info msg="shim disconnected" id=cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4 namespace=k8s.io Jan 30 15:51:04.963411 containerd[1577]: time="2025-01-30T15:51:04.963402672Z" level=warning msg="cleaning up after shim disconnected" id=cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4 namespace=k8s.io Jan 30 15:51:04.963411 containerd[1577]: time="2025-01-30T15:51:04.963413933Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:51:05.762597 containerd[1577]: time="2025-01-30T15:51:05.762530120Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:51:05.763771 containerd[1577]: time="2025-01-30T15:51:05.763605086Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 15:51:05.764992 containerd[1577]: time="2025-01-30T15:51:05.764938305Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 15:51:05.766657 containerd[1577]: time="2025-01-30T15:51:05.766455330Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.483519402s" Jan 30 15:51:05.766657 containerd[1577]: time="2025-01-30T15:51:05.766488352Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 15:51:05.768760 containerd[1577]: time="2025-01-30T15:51:05.768736419Z" level=info msg="CreateContainer within sandbox \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 15:51:05.792982 containerd[1577]: time="2025-01-30T15:51:05.792837004Z" level=info msg="CreateContainer within sandbox \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\"" Jan 30 15:51:05.798293 containerd[1577]: 
time="2025-01-30T15:51:05.797083990Z" level=info msg="StartContainer for \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\"" Jan 30 15:51:05.798393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4-rootfs.mount: Deactivated successfully. Jan 30 15:51:05.804301 containerd[1577]: time="2025-01-30T15:51:05.804269815Z" level=info msg="CreateContainer within sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 15:51:05.841440 containerd[1577]: time="2025-01-30T15:51:05.841318634Z" level=info msg="CreateContainer within sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea\"" Jan 30 15:51:05.844750 containerd[1577]: time="2025-01-30T15:51:05.843065158Z" level=info msg="StartContainer for \"1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea\"" Jan 30 15:51:05.853360 systemd[1]: run-containerd-runc-k8s.io-9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62-runc.SXME44.mount: Deactivated successfully. Jan 30 15:51:05.890533 containerd[1577]: time="2025-01-30T15:51:05.889784903Z" level=info msg="StartContainer for \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\" returns successfully" Jan 30 15:51:05.933174 containerd[1577]: time="2025-01-30T15:51:05.933146390Z" level=info msg="StartContainer for \"1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea\" returns successfully" Jan 30 15:51:06.307269 containerd[1577]: time="2025-01-30T15:51:06.306884921Z" level=info msg="shim disconnected" id=1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea namespace=k8s.io Jan 30 15:51:06.309018 containerd[1577]: time="2025-01-30T15:51:06.307114091Z" level=warning msg="cleaning up after shim disconnected" id=1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea namespace=k8s.io Jan 30 15:51:06.309018 containerd[1577]: time="2025-01-30T15:51:06.308597273Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:51:06.356746 containerd[1577]: time="2025-01-30T15:51:06.355868281Z" level=warning msg="cleanup warnings time=\"2025-01-30T15:51:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 15:51:06.836831 containerd[1577]: time="2025-01-30T15:51:06.836645798Z" level=info msg="CreateContainer within sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 15:51:06.893526 containerd[1577]: time="2025-01-30T15:51:06.891769133Z" level=info msg="CreateContainer within sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\"" Jan 30 15:51:06.896517 containerd[1577]: time="2025-01-30T15:51:06.894090115Z" level=info msg="StartContainer for \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\"" Jan 30 15:51:06.958861 kubelet[2855]: I0130 15:51:06.958692 2855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-5tztk" 
podStartSLOduration=2.753517155 podStartE2EDuration="16.958674251s" podCreationTimestamp="2025-01-30 15:50:50 +0000 UTC" firstStartedPulling="2025-01-30 15:50:51.562221196 +0000 UTC m=+16.054570170" lastFinishedPulling="2025-01-30 15:51:05.767378301 +0000 UTC m=+30.259727266" observedRunningTime="2025-01-30 15:51:06.881686694 +0000 UTC m=+31.374035698" watchObservedRunningTime="2025-01-30 15:51:06.958674251 +0000 UTC m=+31.451023235" Jan 30 15:51:07.008349 containerd[1577]: time="2025-01-30T15:51:07.008277634Z" level=info msg="StartContainer for \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\" returns successfully" Jan 30 15:51:07.127698 kubelet[2855]: I0130 15:51:07.127569 2855 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 15:51:07.167638 kubelet[2855]: I0130 15:51:07.167583 2855 topology_manager.go:215] "Topology Admit Handler" podUID="caa11621-c275-4c56-a350-a1180b1ce118" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mvkhr" Jan 30 15:51:07.170573 kubelet[2855]: I0130 15:51:07.170170 2855 topology_manager.go:215] "Topology Admit Handler" podUID="5e6583c5-2c19-4e46-8a01-093bbcd61d62" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ckcc7" Jan 30 15:51:07.272402 kubelet[2855]: I0130 15:51:07.272366 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzngm\" (UniqueName: \"kubernetes.io/projected/caa11621-c275-4c56-a350-a1180b1ce118-kube-api-access-bzngm\") pod \"coredns-7db6d8ff4d-mvkhr\" (UID: \"caa11621-c275-4c56-a350-a1180b1ce118\") " pod="kube-system/coredns-7db6d8ff4d-mvkhr" Jan 30 15:51:07.272402 kubelet[2855]: I0130 15:51:07.272408 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e6583c5-2c19-4e46-8a01-093bbcd61d62-config-volume\") pod \"coredns-7db6d8ff4d-ckcc7\" (UID: \"5e6583c5-2c19-4e46-8a01-093bbcd61d62\") " pod="kube-system/coredns-7db6d8ff4d-ckcc7" Jan 30 15:51:07.273066 kubelet[2855]: I0130 15:51:07.272430 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nlnt\" (UniqueName: \"kubernetes.io/projected/5e6583c5-2c19-4e46-8a01-093bbcd61d62-kube-api-access-5nlnt\") pod \"coredns-7db6d8ff4d-ckcc7\" (UID: \"5e6583c5-2c19-4e46-8a01-093bbcd61d62\") " pod="kube-system/coredns-7db6d8ff4d-ckcc7" Jan 30 15:51:07.273066 kubelet[2855]: I0130 15:51:07.272457 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caa11621-c275-4c56-a350-a1180b1ce118-config-volume\") pod \"coredns-7db6d8ff4d-mvkhr\" (UID: \"caa11621-c275-4c56-a350-a1180b1ce118\") " pod="kube-system/coredns-7db6d8ff4d-mvkhr" Jan 30 15:51:07.477138 containerd[1577]: time="2025-01-30T15:51:07.476987485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mvkhr,Uid:caa11621-c275-4c56-a350-a1180b1ce118,Namespace:kube-system,Attempt:0,}" Jan 30 15:51:07.482227 containerd[1577]: time="2025-01-30T15:51:07.482183008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckcc7,Uid:5e6583c5-2c19-4e46-8a01-093bbcd61d62,Namespace:kube-system,Attempt:0,}" Jan 30 15:51:07.849361 kubelet[2855]: I0130 15:51:07.848617 2855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kwp9p" podStartSLOduration=8.057951212 
podStartE2EDuration="17.848579927s" podCreationTimestamp="2025-01-30 15:50:50 +0000 UTC" firstStartedPulling="2025-01-30 15:50:51.491025511 +0000 UTC m=+15.983374475" lastFinishedPulling="2025-01-30 15:51:01.281654176 +0000 UTC m=+25.774003190" observedRunningTime="2025-01-30 15:51:07.84815502 +0000 UTC m=+32.340503994" watchObservedRunningTime="2025-01-30 15:51:07.848579927 +0000 UTC m=+32.340928911" Jan 30 15:51:09.205130 systemd-networkd[1214]: cilium_host: Link UP Jan 30 15:51:09.205622 systemd-networkd[1214]: cilium_net: Link UP Jan 30 15:51:09.207771 systemd-networkd[1214]: cilium_net: Gained carrier Jan 30 15:51:09.208178 systemd-networkd[1214]: cilium_host: Gained carrier Jan 30 15:51:09.308370 systemd-networkd[1214]: cilium_vxlan: Link UP Jan 30 15:51:09.308378 systemd-networkd[1214]: cilium_vxlan: Gained carrier Jan 30 15:51:09.628541 kernel: NET: Registered PF_ALG protocol family Jan 30 15:51:09.651602 systemd-networkd[1214]: cilium_host: Gained IPv6LL Jan 30 15:51:10.107702 systemd-networkd[1214]: cilium_net: Gained IPv6LL Jan 30 15:51:10.399020 systemd-networkd[1214]: lxc_health: Link UP Jan 30 15:51:10.405408 systemd-networkd[1214]: lxc_health: Gained carrier Jan 30 15:51:10.492660 systemd-networkd[1214]: cilium_vxlan: Gained IPv6LL Jan 30 15:51:10.551350 systemd-networkd[1214]: lxce7dd29715be1: Link UP Jan 30 15:51:10.558631 kernel: eth0: renamed from tmpb253e Jan 30 15:51:10.567360 systemd-networkd[1214]: lxce7dd29715be1: Gained carrier Jan 30 15:51:10.570563 kernel: eth0: renamed from tmp38fb1 Jan 30 15:51:10.569106 systemd-networkd[1214]: lxc0780b95d8f70: Link UP Jan 30 15:51:10.579447 systemd-networkd[1214]: lxc0780b95d8f70: Gained carrier Jan 30 15:51:12.028436 systemd-networkd[1214]: lxc0780b95d8f70: Gained IPv6LL Jan 30 15:51:12.092659 systemd-networkd[1214]: lxc_health: Gained IPv6LL Jan 30 15:51:12.603677 systemd-networkd[1214]: lxce7dd29715be1: Gained IPv6LL Jan 30 15:51:15.061373 containerd[1577]: time="2025-01-30T15:51:15.060556534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:51:15.061373 containerd[1577]: time="2025-01-30T15:51:15.060718850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:51:15.061373 containerd[1577]: time="2025-01-30T15:51:15.060788541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:51:15.065037 containerd[1577]: time="2025-01-30T15:51:15.062790857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:51:15.129301 containerd[1577]: time="2025-01-30T15:51:15.127604725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:51:15.129301 containerd[1577]: time="2025-01-30T15:51:15.128990952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:51:15.129301 containerd[1577]: time="2025-01-30T15:51:15.129007063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:51:15.129301 containerd[1577]: time="2025-01-30T15:51:15.129095028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:51:15.159974 containerd[1577]: time="2025-01-30T15:51:15.159810675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckcc7,Uid:5e6583c5-2c19-4e46-8a01-093bbcd61d62,Namespace:kube-system,Attempt:0,} returns sandbox id \"38fb19a535bef5c2434f563d06f4bdecb1ef94965a565a0dda977bfcab8f1415\"" Jan 30 15:51:15.165933 containerd[1577]: time="2025-01-30T15:51:15.165880012Z" level=info msg="CreateContainer within sandbox \"38fb19a535bef5c2434f563d06f4bdecb1ef94965a565a0dda977bfcab8f1415\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 15:51:15.211887 containerd[1577]: time="2025-01-30T15:51:15.211846998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mvkhr,Uid:caa11621-c275-4c56-a350-a1180b1ce118,Namespace:kube-system,Attempt:0,} returns sandbox id \"b253eeb0222ab9934aea4cc831b7fa1894d16fa148b5427af4ddfe2ae5113976\"" Jan 30 15:51:15.216785 containerd[1577]: time="2025-01-30T15:51:15.216733630Z" level=info msg="CreateContainer within sandbox \"b253eeb0222ab9934aea4cc831b7fa1894d16fa148b5427af4ddfe2ae5113976\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 15:51:15.229879 containerd[1577]: time="2025-01-30T15:51:15.229825106Z" level=info msg="CreateContainer within sandbox \"38fb19a535bef5c2434f563d06f4bdecb1ef94965a565a0dda977bfcab8f1415\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ca20ae8b8f938f46778e8afd1ae8e69c189e0f971aa4a9650b7ffea6d1ddef3\"" Jan 30 15:51:15.231622 containerd[1577]: time="2025-01-30T15:51:15.231582281Z" level=info msg="StartContainer for \"8ca20ae8b8f938f46778e8afd1ae8e69c189e0f971aa4a9650b7ffea6d1ddef3\"" Jan 30 15:51:15.256722 containerd[1577]: time="2025-01-30T15:51:15.256677276Z" level=info msg="CreateContainer within sandbox \"b253eeb0222ab9934aea4cc831b7fa1894d16fa148b5427af4ddfe2ae5113976\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2bb1c11965bd06c2f6c4b0eed27b74bbaa8e4ca64f98fb1c714126858e4646f9\"" Jan 30 15:51:15.261425 containerd[1577]: time="2025-01-30T15:51:15.261389950Z" level=info msg="StartContainer for \"2bb1c11965bd06c2f6c4b0eed27b74bbaa8e4ca64f98fb1c714126858e4646f9\"" Jan 30 15:51:15.320129 containerd[1577]: time="2025-01-30T15:51:15.319270396Z" level=info msg="StartContainer for \"8ca20ae8b8f938f46778e8afd1ae8e69c189e0f971aa4a9650b7ffea6d1ddef3\" returns successfully" Jan 30 15:51:15.346565 containerd[1577]: time="2025-01-30T15:51:15.345834805Z" level=info msg="StartContainer for \"2bb1c11965bd06c2f6c4b0eed27b74bbaa8e4ca64f98fb1c714126858e4646f9\" returns successfully" Jan 30 15:51:15.876336 kubelet[2855]: I0130 15:51:15.875843 2855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ckcc7" podStartSLOduration=24.875810622 podStartE2EDuration="24.875810622s" podCreationTimestamp="2025-01-30 15:50:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:51:15.868079559 +0000 UTC m=+40.360428553" watchObservedRunningTime="2025-01-30 15:51:15.875810622 +0000 UTC m=+40.368159656" Jan 30 15:51:15.901444 kubelet[2855]: I0130 15:51:15.897602 2855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mvkhr" podStartSLOduration=25.897564542 podStartE2EDuration="25.897564542s" podCreationTimestamp="2025-01-30 15:50:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:51:15.897416022 +0000 UTC m=+40.389765037" watchObservedRunningTime="2025-01-30 15:51:15.897564542 +0000 UTC m=+40.389913556" Jan 30 15:52:03.878208 systemd[1]: Started sshd@9-172.24.4.96:22-172.24.4.1:53914.service - OpenSSH per-connection server daemon (172.24.4.1:53914). Jan 30 15:52:05.245377 sshd[4222]: Accepted publickey for core from 172.24.4.1 port 53914 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:52:05.249164 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:52:05.264786 systemd-logind[1561]: New session 12 of user core. Jan 30 15:52:05.272830 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 15:52:06.079136 sshd[4222]: pam_unix(sshd:session): session closed for user core Jan 30 15:52:06.091120 systemd[1]: sshd@9-172.24.4.96:22-172.24.4.1:53914.service: Deactivated successfully. Jan 30 15:52:06.091214 systemd-logind[1561]: Session 12 logged out. Waiting for processes to exit. Jan 30 15:52:06.102238 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 15:52:06.108479 systemd-logind[1561]: Removed session 12. Jan 30 15:52:11.092223 systemd[1]: Started sshd@10-172.24.4.96:22-172.24.4.1:53920.service - OpenSSH per-connection server daemon (172.24.4.1:53920). Jan 30 15:52:12.244383 sshd[4237]: Accepted publickey for core from 172.24.4.1 port 53920 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:52:12.246249 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:52:12.253554 systemd-logind[1561]: New session 13 of user core. Jan 30 15:52:12.260246 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 15:52:12.967059 sshd[4237]: pam_unix(sshd:session): session closed for user core Jan 30 15:52:12.975643 systemd[1]: sshd@10-172.24.4.96:22-172.24.4.1:53920.service: Deactivated successfully. Jan 30 15:52:12.982932 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 15:52:12.985859 systemd-logind[1561]: Session 13 logged out. Waiting for processes to exit. Jan 30 15:52:12.988190 systemd-logind[1561]: Removed session 13. Jan 30 15:52:17.979142 systemd[1]: Started sshd@11-172.24.4.96:22-172.24.4.1:50236.service - OpenSSH per-connection server daemon (172.24.4.1:50236). Jan 30 15:52:19.342376 sshd[4252]: Accepted publickey for core from 172.24.4.1 port 50236 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:52:19.344815 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:52:19.353967 systemd-logind[1561]: New session 14 of user core. Jan 30 15:52:19.359430 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 15:52:20.051961 sshd[4252]: pam_unix(sshd:session): session closed for user core Jan 30 15:52:20.063130 systemd[1]: Started sshd@12-172.24.4.96:22-172.24.4.1:50240.service - OpenSSH per-connection server daemon (172.24.4.1:50240). Jan 30 15:52:20.064625 systemd[1]: sshd@11-172.24.4.96:22-172.24.4.1:50236.service: Deactivated successfully. Jan 30 15:52:20.066979 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 15:52:20.069074 systemd-logind[1561]: Session 14 logged out. Waiting for processes to exit. Jan 30 15:52:20.075562 systemd-logind[1561]: Removed session 14. 
Jan 30 15:52:21.472898 sshd[4264]: Accepted publickey for core from 172.24.4.1 port 50240 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:52:21.476668 sshd[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:52:21.492220 systemd-logind[1561]: New session 15 of user core. Jan 30 15:52:21.499862 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 15:52:22.298868 sshd[4264]: pam_unix(sshd:session): session closed for user core Jan 30 15:52:22.309616 systemd[1]: Started sshd@13-172.24.4.96:22-172.24.4.1:50248.service - OpenSSH per-connection server daemon (172.24.4.1:50248). Jan 30 15:52:22.313286 systemd[1]: sshd@12-172.24.4.96:22-172.24.4.1:50240.service: Deactivated successfully. Jan 30 15:52:22.322905 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 15:52:22.329208 systemd-logind[1561]: Session 15 logged out. Waiting for processes to exit. Jan 30 15:52:22.334604 systemd-logind[1561]: Removed session 15. Jan 30 15:52:23.752373 sshd[4278]: Accepted publickey for core from 172.24.4.1 port 50248 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:52:23.754554 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:52:23.763174 systemd-logind[1561]: New session 16 of user core. Jan 30 15:52:23.778092 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 15:52:24.506035 sshd[4278]: pam_unix(sshd:session): session closed for user core Jan 30 15:52:24.513372 systemd-logind[1561]: Session 16 logged out. Waiting for processes to exit. Jan 30 15:52:24.513788 systemd[1]: sshd@13-172.24.4.96:22-172.24.4.1:50248.service: Deactivated successfully. Jan 30 15:52:24.527388 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 15:52:24.537036 systemd-logind[1561]: Removed session 16. Jan 30 15:52:29.515923 systemd[1]: Started sshd@14-172.24.4.96:22-172.24.4.1:33432.service - OpenSSH per-connection server daemon (172.24.4.1:33432). Jan 30 15:52:30.716750 sshd[4295]: Accepted publickey for core from 172.24.4.1 port 33432 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:52:30.719607 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:52:30.730612 systemd-logind[1561]: New session 17 of user core. Jan 30 15:52:30.740077 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 15:52:31.452007 sshd[4295]: pam_unix(sshd:session): session closed for user core Jan 30 15:52:31.459146 systemd[1]: sshd@14-172.24.4.96:22-172.24.4.1:33432.service: Deactivated successfully. Jan 30 15:52:31.467004 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 15:52:31.469232 systemd-logind[1561]: Session 17 logged out. Waiting for processes to exit. Jan 30 15:52:31.472137 systemd-logind[1561]: Removed session 17. Jan 30 15:52:36.463607 systemd[1]: Started sshd@15-172.24.4.96:22-172.24.4.1:60648.service - OpenSSH per-connection server daemon (172.24.4.1:60648). Jan 30 15:52:38.023622 sshd[4311]: Accepted publickey for core from 172.24.4.1 port 60648 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:52:38.026430 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:52:38.037214 systemd-logind[1561]: New session 18 of user core. Jan 30 15:52:38.044076 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 30 15:52:38.664746 sshd[4311]: pam_unix(sshd:session): session closed for user core Jan 30 15:52:38.676956 systemd[1]: Started sshd@16-172.24.4.96:22-172.24.4.1:60660.service - OpenSSH per-connection server daemon (172.24.4.1:60660). Jan 30 15:52:38.685713 systemd[1]: sshd@15-172.24.4.96:22-172.24.4.1:60648.service: Deactivated successfully. Jan 30 15:52:38.695111 systemd-logind[1561]: Session 18 logged out. Waiting for processes to exit. Jan 30 15:52:38.697104 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 15:52:38.700175 systemd-logind[1561]: Removed session 18. Jan 30 15:52:39.831726 sshd[4322]: Accepted publickey for core from 172.24.4.1 port 60660 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:52:39.834834 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:52:39.846751 systemd-logind[1561]: New session 19 of user core. Jan 30 15:52:39.854337 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 15:52:40.677644 sshd[4322]: pam_unix(sshd:session): session closed for user core Jan 30 15:52:40.693742 systemd[1]: Started sshd@17-172.24.4.96:22-172.24.4.1:60672.service - OpenSSH per-connection server daemon (172.24.4.1:60672). Jan 30 15:52:40.696394 systemd[1]: sshd@16-172.24.4.96:22-172.24.4.1:60660.service: Deactivated successfully. Jan 30 15:52:40.702958 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 15:52:40.712715 systemd-logind[1561]: Session 19 logged out. Waiting for processes to exit. Jan 30 15:52:40.715206 systemd-logind[1561]: Removed session 19. Jan 30 15:52:41.915890 sshd[4335]: Accepted publickey for core from 172.24.4.1 port 60672 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:52:41.919399 sshd[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:52:41.930773 systemd-logind[1561]: New session 20 of user core. Jan 30 15:52:41.942026 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 15:52:44.691835 sshd[4335]: pam_unix(sshd:session): session closed for user core Jan 30 15:52:44.713142 systemd[1]: Started sshd@18-172.24.4.96:22-172.24.4.1:43440.service - OpenSSH per-connection server daemon (172.24.4.1:43440). Jan 30 15:52:44.717233 systemd[1]: sshd@17-172.24.4.96:22-172.24.4.1:60672.service: Deactivated successfully. Jan 30 15:52:44.728406 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 15:52:44.734286 systemd-logind[1561]: Session 20 logged out. Waiting for processes to exit. Jan 30 15:52:44.737803 systemd-logind[1561]: Removed session 20. Jan 30 15:52:46.343019 sshd[4354]: Accepted publickey for core from 172.24.4.1 port 43440 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:52:46.345832 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:52:46.356690 systemd-logind[1561]: New session 21 of user core. Jan 30 15:52:46.362423 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 15:52:47.258729 sshd[4354]: pam_unix(sshd:session): session closed for user core Jan 30 15:52:47.268596 systemd[1]: Started sshd@19-172.24.4.96:22-172.24.4.1:43444.service - OpenSSH per-connection server daemon (172.24.4.1:43444). Jan 30 15:52:47.271451 systemd[1]: sshd@18-172.24.4.96:22-172.24.4.1:43440.service: Deactivated successfully. Jan 30 15:52:47.279677 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 15:52:47.282131 systemd-logind[1561]: Session 21 logged out. 
Waiting for processes to exit. Jan 30 15:52:47.287038 systemd-logind[1561]: Removed session 21. Jan 30 15:52:48.338568 sshd[4365]: Accepted publickey for core from 172.24.4.1 port 43444 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:52:48.341092 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:52:48.351579 systemd-logind[1561]: New session 22 of user core. Jan 30 15:52:48.358141 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 15:52:49.034336 sshd[4365]: pam_unix(sshd:session): session closed for user core Jan 30 15:52:49.039449 systemd[1]: sshd@19-172.24.4.96:22-172.24.4.1:43444.service: Deactivated successfully. Jan 30 15:52:49.042339 systemd-logind[1561]: Session 22 logged out. Waiting for processes to exit. Jan 30 15:52:49.043156 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 15:52:49.045691 systemd-logind[1561]: Removed session 22. Jan 30 15:52:54.044025 systemd[1]: Started sshd@20-172.24.4.96:22-172.24.4.1:41188.service - OpenSSH per-connection server daemon (172.24.4.1:41188). Jan 30 15:52:55.335406 sshd[4386]: Accepted publickey for core from 172.24.4.1 port 41188 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:52:55.337639 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:52:55.345651 systemd-logind[1561]: New session 23 of user core. Jan 30 15:52:55.347880 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 15:52:55.978028 sshd[4386]: pam_unix(sshd:session): session closed for user core Jan 30 15:52:55.985647 systemd[1]: sshd@20-172.24.4.96:22-172.24.4.1:41188.service: Deactivated successfully. Jan 30 15:52:55.993099 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 15:52:55.995417 systemd-logind[1561]: Session 23 logged out. Waiting for processes to exit. Jan 30 15:52:55.997845 systemd-logind[1561]: Removed session 23. Jan 30 15:53:00.995370 systemd[1]: Started sshd@21-172.24.4.96:22-172.24.4.1:41190.service - OpenSSH per-connection server daemon (172.24.4.1:41190). Jan 30 15:53:02.184768 sshd[4400]: Accepted publickey for core from 172.24.4.1 port 41190 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:53:02.187444 sshd[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:53:02.197625 systemd-logind[1561]: New session 24 of user core. Jan 30 15:53:02.204076 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 15:53:03.050936 sshd[4400]: pam_unix(sshd:session): session closed for user core Jan 30 15:53:03.058877 systemd[1]: sshd@21-172.24.4.96:22-172.24.4.1:41190.service: Deactivated successfully. Jan 30 15:53:03.065253 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 15:53:03.068177 systemd-logind[1561]: Session 24 logged out. Waiting for processes to exit. Jan 30 15:53:03.070604 systemd-logind[1561]: Removed session 24. Jan 30 15:53:08.063137 systemd[1]: Started sshd@22-172.24.4.96:22-172.24.4.1:60860.service - OpenSSH per-connection server daemon (172.24.4.1:60860). Jan 30 15:53:09.221112 sshd[4414]: Accepted publickey for core from 172.24.4.1 port 60860 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:53:09.224095 sshd[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:53:09.235870 systemd-logind[1561]: New session 25 of user core. 
Jan 30 15:53:09.242042 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 15:53:09.980054 sshd[4414]: pam_unix(sshd:session): session closed for user core Jan 30 15:53:09.993023 systemd[1]: Started sshd@23-172.24.4.96:22-172.24.4.1:60862.service - OpenSSH per-connection server daemon (172.24.4.1:60862). Jan 30 15:53:09.996981 systemd[1]: sshd@22-172.24.4.96:22-172.24.4.1:60860.service: Deactivated successfully. Jan 30 15:53:10.007937 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 15:53:10.013084 systemd-logind[1561]: Session 25 logged out. Waiting for processes to exit. Jan 30 15:53:10.015298 systemd-logind[1561]: Removed session 25. Jan 30 15:53:11.363041 sshd[4425]: Accepted publickey for core from 172.24.4.1 port 60862 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:53:11.365719 sshd[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:53:11.376137 systemd-logind[1561]: New session 26 of user core. Jan 30 15:53:11.382959 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 15:53:13.539571 containerd[1577]: time="2025-01-30T15:53:13.539391644Z" level=info msg="StopContainer for \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\" with timeout 30 (s)" Jan 30 15:53:13.541084 containerd[1577]: time="2025-01-30T15:53:13.540619208Z" level=info msg="Stop container \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\" with signal terminated" Jan 30 15:53:13.555601 systemd[1]: run-containerd-runc-k8s.io-9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec-runc.X8YBfq.mount: Deactivated successfully. Jan 30 15:53:13.567807 containerd[1577]: time="2025-01-30T15:53:13.567763530Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 15:53:13.576659 containerd[1577]: time="2025-01-30T15:53:13.576618204Z" level=info msg="StopContainer for \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\" with timeout 2 (s)" Jan 30 15:53:13.577439 containerd[1577]: time="2025-01-30T15:53:13.577420700Z" level=info msg="Stop container \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\" with signal terminated" Jan 30 15:53:13.587887 systemd-networkd[1214]: lxc_health: Link DOWN Jan 30 15:53:13.588196 systemd-networkd[1214]: lxc_health: Lost carrier Jan 30 15:53:13.612312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62-rootfs.mount: Deactivated successfully. Jan 30 15:53:13.636261 containerd[1577]: time="2025-01-30T15:53:13.635979234Z" level=info msg="shim disconnected" id=9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec namespace=k8s.io Jan 30 15:53:13.636261 containerd[1577]: time="2025-01-30T15:53:13.636076527Z" level=warning msg="cleaning up after shim disconnected" id=9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec namespace=k8s.io Jan 30 15:53:13.636261 containerd[1577]: time="2025-01-30T15:53:13.636115099Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:53:13.637110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec-rootfs.mount: Deactivated successfully. 
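The StopContainer entries above show the usual graceful-stop contract: the runtime delivers SIGTERM ("with signal terminated") and only escalates to SIGKILL if the container is still alive when the per-call timeout expires (30 s for the cilium-operator container, 2 s for the cilium-agent). A minimal sketch of that pattern for an ordinary child process, using only the Go standard library; it illustrates the semantics and is not containerd code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    // stopWithTimeout mirrors the StopContainer semantics: SIGTERM first,
    // SIGKILL if the process has not exited once the timeout elapses.
    func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
    	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
    		return err
    	}
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()
    	select {
    	case err := <-done:
    		return err // exited within the grace period
    	case <-time.After(timeout):
    		_ = cmd.Process.Kill() // escalate, as the runtime does after the timeout
    		return <-done
    	}
    }

    func main() {
    	cmd := exec.Command("sleep", "60")
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	fmt.Println(stopWithTimeout(cmd, 2*time.Second))
    }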
Jan 30 15:53:13.638601 containerd[1577]: time="2025-01-30T15:53:13.637603141Z" level=info msg="shim disconnected" id=9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62 namespace=k8s.io Jan 30 15:53:13.638601 containerd[1577]: time="2025-01-30T15:53:13.637640922Z" level=warning msg="cleaning up after shim disconnected" id=9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62 namespace=k8s.io Jan 30 15:53:13.638601 containerd[1577]: time="2025-01-30T15:53:13.637649858Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:53:13.682646 containerd[1577]: time="2025-01-30T15:53:13.682608836Z" level=info msg="StopContainer for \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\" returns successfully" Jan 30 15:53:13.682905 containerd[1577]: time="2025-01-30T15:53:13.682782552Z" level=info msg="StopContainer for \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\" returns successfully" Jan 30 15:53:13.683608 containerd[1577]: time="2025-01-30T15:53:13.683575500Z" level=info msg="StopPodSandbox for \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\"" Jan 30 15:53:13.683658 containerd[1577]: time="2025-01-30T15:53:13.683608371Z" level=info msg="Container to stop \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:53:13.684038 containerd[1577]: time="2025-01-30T15:53:13.683780013Z" level=info msg="StopPodSandbox for \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\"" Jan 30 15:53:13.684038 containerd[1577]: time="2025-01-30T15:53:13.683803477Z" level=info msg="Container to stop \"ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:53:13.684038 containerd[1577]: time="2025-01-30T15:53:13.683815700Z" level=info msg="Container to stop \"6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:53:13.684038 containerd[1577]: time="2025-01-30T15:53:13.683826250Z" level=info msg="Container to stop \"cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:53:13.684038 containerd[1577]: time="2025-01-30T15:53:13.683836349Z" level=info msg="Container to stop \"1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:53:13.684038 containerd[1577]: time="2025-01-30T15:53:13.683846588Z" level=info msg="Container to stop \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 15:53:13.687354 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae-shm.mount: Deactivated successfully. Jan 30 15:53:13.687532 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1-shm.mount: Deactivated successfully. 
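The repeated "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" messages above are informational: when StopPodSandbox tears the sandbox down, only containers still in the RUNNING (or UNKNOWN) CRI state need to be signalled, and the cilium init containers exited long ago. A tiny sketch of that check, using the state names as they appear in the log; the enum is written out by hand here rather than imported from the CRI API.

    package main

    import "fmt"

    // Container states as named by the CRI runtime v1 API (hand-copied for this sketch).
    type ContainerState int

    const (
    	ContainerCreated ContainerState = iota // CONTAINER_CREATED
    	ContainerRunning                       // CONTAINER_RUNNING
    	ContainerExited                        // CONTAINER_EXITED
    	ContainerUnknown                       // CONTAINER_UNKNOWN
    )

    // needsStopSignal reports whether a container must still be signalled
    // before its sandbox can be removed.
    func needsStopSignal(s ContainerState) bool {
    	return s == ContainerRunning || s == ContainerUnknown
    }

    func main() {
    	fmt.Println(needsStopSignal(ContainerExited))  // false: the init containers already exited
    	fmt.Println(needsStopSignal(ContainerRunning)) // true: e.g. the cilium-agent container
    }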
Jan 30 15:53:13.750875 containerd[1577]: time="2025-01-30T15:53:13.750182566Z" level=info msg="shim disconnected" id=3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1 namespace=k8s.io Jan 30 15:53:13.751020 containerd[1577]: time="2025-01-30T15:53:13.750584510Z" level=info msg="shim disconnected" id=2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae namespace=k8s.io Jan 30 15:53:13.751020 containerd[1577]: time="2025-01-30T15:53:13.750973159Z" level=warning msg="cleaning up after shim disconnected" id=2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae namespace=k8s.io Jan 30 15:53:13.751079 containerd[1577]: time="2025-01-30T15:53:13.751017302Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:53:13.751406 containerd[1577]: time="2025-01-30T15:53:13.751217138Z" level=warning msg="cleaning up after shim disconnected" id=3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1 namespace=k8s.io Jan 30 15:53:13.751406 containerd[1577]: time="2025-01-30T15:53:13.751253957Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:53:13.766908 containerd[1577]: time="2025-01-30T15:53:13.766832887Z" level=info msg="TearDown network for sandbox \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\" successfully" Jan 30 15:53:13.766908 containerd[1577]: time="2025-01-30T15:53:13.766867393Z" level=info msg="StopPodSandbox for \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\" returns successfully" Jan 30 15:53:13.784314 containerd[1577]: time="2025-01-30T15:53:13.784177311Z" level=info msg="TearDown network for sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" successfully" Jan 30 15:53:13.784314 containerd[1577]: time="2025-01-30T15:53:13.784209442Z" level=info msg="StopPodSandbox for \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" returns successfully" Jan 30 15:53:13.807616 kubelet[2855]: I0130 15:53:13.805525 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/395991b0-1e39-45bf-9a19-60ae6325572d-cilium-config-path\") pod \"395991b0-1e39-45bf-9a19-60ae6325572d\" (UID: \"395991b0-1e39-45bf-9a19-60ae6325572d\") " Jan 30 15:53:13.807616 kubelet[2855]: I0130 15:53:13.805569 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhsm7\" (UniqueName: \"kubernetes.io/projected/395991b0-1e39-45bf-9a19-60ae6325572d-kube-api-access-fhsm7\") pod \"395991b0-1e39-45bf-9a19-60ae6325572d\" (UID: \"395991b0-1e39-45bf-9a19-60ae6325572d\") " Jan 30 15:53:13.809861 kubelet[2855]: I0130 15:53:13.808603 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395991b0-1e39-45bf-9a19-60ae6325572d-kube-api-access-fhsm7" (OuterVolumeSpecName: "kube-api-access-fhsm7") pod "395991b0-1e39-45bf-9a19-60ae6325572d" (UID: "395991b0-1e39-45bf-9a19-60ae6325572d"). InnerVolumeSpecName "kube-api-access-fhsm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:53:13.810542 kubelet[2855]: I0130 15:53:13.810115 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/395991b0-1e39-45bf-9a19-60ae6325572d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "395991b0-1e39-45bf-9a19-60ae6325572d" (UID: "395991b0-1e39-45bf-9a19-60ae6325572d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:53:13.908556 kubelet[2855]: I0130 15:53:13.906445 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-lib-modules\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.908556 kubelet[2855]: I0130 15:53:13.906565 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-cilium-run\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.908556 kubelet[2855]: I0130 15:53:13.906604 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-etc-cni-netd\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.908556 kubelet[2855]: I0130 15:53:13.906639 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-cni-path\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.908556 kubelet[2855]: I0130 15:53:13.906668 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-bpf-maps\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.908556 kubelet[2855]: I0130 15:53:13.906665 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:53:13.908966 kubelet[2855]: I0130 15:53:13.906701 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-host-proc-sys-net\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.908966 kubelet[2855]: I0130 15:53:13.906746 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw9lj\" (UniqueName: \"kubernetes.io/projected/62da5769-a556-4d65-ac40-a32e671ed2e5-kube-api-access-dw9lj\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.908966 kubelet[2855]: I0130 15:53:13.906759 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:53:13.908966 kubelet[2855]: I0130 15:53:13.906778 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-host-proc-sys-kernel\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.908966 kubelet[2855]: I0130 15:53:13.906799 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:53:13.909196 kubelet[2855]: I0130 15:53:13.906814 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62da5769-a556-4d65-ac40-a32e671ed2e5-clustermesh-secrets\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.909196 kubelet[2855]: I0130 15:53:13.906844 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-hostproc\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.909196 kubelet[2855]: I0130 15:53:13.906884 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62da5769-a556-4d65-ac40-a32e671ed2e5-cilium-config-path\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.909196 kubelet[2855]: I0130 15:53:13.906920 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-xtables-lock\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.909196 kubelet[2855]: I0130 15:53:13.906954 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62da5769-a556-4d65-ac40-a32e671ed2e5-hubble-tls\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.909196 kubelet[2855]: I0130 15:53:13.906984 2855 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-cilium-cgroup\") pod \"62da5769-a556-4d65-ac40-a32e671ed2e5\" (UID: \"62da5769-a556-4d65-ac40-a32e671ed2e5\") " Jan 30 15:53:13.909458 kubelet[2855]: I0130 15:53:13.907042 2855 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-etc-cni-netd\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:13.909458 kubelet[2855]: I0130 15:53:13.907091 2855 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/395991b0-1e39-45bf-9a19-60ae6325572d-cilium-config-path\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 
15:53:13.909458 kubelet[2855]: I0130 15:53:13.907111 2855 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fhsm7\" (UniqueName: \"kubernetes.io/projected/395991b0-1e39-45bf-9a19-60ae6325572d-kube-api-access-fhsm7\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:13.909458 kubelet[2855]: I0130 15:53:13.907133 2855 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-cilium-run\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:13.909458 kubelet[2855]: I0130 15:53:13.907150 2855 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-lib-modules\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:13.909458 kubelet[2855]: I0130 15:53:13.906841 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:53:13.909826 kubelet[2855]: I0130 15:53:13.906869 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-cni-path" (OuterVolumeSpecName: "cni-path") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:53:13.909826 kubelet[2855]: I0130 15:53:13.906894 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:53:13.909826 kubelet[2855]: I0130 15:53:13.907207 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:53:13.909826 kubelet[2855]: I0130 15:53:13.907230 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-hostproc" (OuterVolumeSpecName: "hostproc") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:53:13.911778 kubelet[2855]: I0130 15:53:13.911720 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:53:13.912123 kubelet[2855]: I0130 15:53:13.912057 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:53:13.915015 kubelet[2855]: I0130 15:53:13.914935 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62da5769-a556-4d65-ac40-a32e671ed2e5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:53:13.916205 kubelet[2855]: I0130 15:53:13.916150 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62da5769-a556-4d65-ac40-a32e671ed2e5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:53:13.917823 kubelet[2855]: I0130 15:53:13.917764 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62da5769-a556-4d65-ac40-a32e671ed2e5-kube-api-access-dw9lj" (OuterVolumeSpecName: "kube-api-access-dw9lj") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "kube-api-access-dw9lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:53:13.919134 kubelet[2855]: I0130 15:53:13.919096 2855 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62da5769-a556-4d65-ac40-a32e671ed2e5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "62da5769-a556-4d65-ac40-a32e671ed2e5" (UID: "62da5769-a556-4d65-ac40-a32e671ed2e5"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:53:14.008045 kubelet[2855]: I0130 15:53:14.007984 2855 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-host-proc-sys-kernel\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:14.008443 kubelet[2855]: I0130 15:53:14.008356 2855 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62da5769-a556-4d65-ac40-a32e671ed2e5-clustermesh-secrets\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:14.008759 kubelet[2855]: I0130 15:53:14.008706 2855 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-hostproc\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:14.008977 kubelet[2855]: I0130 15:53:14.008916 2855 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62da5769-a556-4d65-ac40-a32e671ed2e5-cilium-config-path\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:14.009398 kubelet[2855]: I0130 15:53:14.009153 2855 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-xtables-lock\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:14.009398 kubelet[2855]: I0130 15:53:14.009242 2855 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62da5769-a556-4d65-ac40-a32e671ed2e5-hubble-tls\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:14.009398 kubelet[2855]: I0130 15:53:14.009308 2855 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-cilium-cgroup\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:14.009398 kubelet[2855]: I0130 15:53:14.009335 2855 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-cni-path\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:14.009398 kubelet[2855]: I0130 15:53:14.009360 2855 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-bpf-maps\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:14.009950 kubelet[2855]: I0130 15:53:14.009830 2855 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62da5769-a556-4d65-ac40-a32e671ed2e5-host-proc-sys-net\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:14.009950 kubelet[2855]: I0130 15:53:14.009912 2855 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dw9lj\" (UniqueName: \"kubernetes.io/projected/62da5769-a556-4d65-ac40-a32e671ed2e5-kube-api-access-dw9lj\") on node \"ci-4081-3-0-c-6e27ecb2ae.novalocal\" DevicePath \"\"" Jan 30 15:53:14.210818 kubelet[2855]: I0130 15:53:14.210039 2855 scope.go:117] "RemoveContainer" containerID="9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62" Jan 30 15:53:14.224264 containerd[1577]: 
time="2025-01-30T15:53:14.224056665Z" level=info msg="RemoveContainer for \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\"" Jan 30 15:53:14.317486 containerd[1577]: time="2025-01-30T15:53:14.317403644Z" level=info msg="RemoveContainer for \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\" returns successfully" Jan 30 15:53:14.318321 kubelet[2855]: I0130 15:53:14.318240 2855 scope.go:117] "RemoveContainer" containerID="9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62" Jan 30 15:53:14.319757 containerd[1577]: time="2025-01-30T15:53:14.318871629Z" level=error msg="ContainerStatus for \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\": not found" Jan 30 15:53:14.319900 kubelet[2855]: E0130 15:53:14.319125 2855 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\": not found" containerID="9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62" Jan 30 15:53:14.319900 kubelet[2855]: I0130 15:53:14.319188 2855 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62"} err="failed to get container status \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\": rpc error: code = NotFound desc = an error occurred when try to find container \"9aa72b8cad125cd1c2e2eabdd82a3e1cbafe0d4af175a01df42c5036aa5a8d62\": not found" Jan 30 15:53:14.319900 kubelet[2855]: I0130 15:53:14.319317 2855 scope.go:117] "RemoveContainer" containerID="9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec" Jan 30 15:53:14.333827 containerd[1577]: time="2025-01-30T15:53:14.333726831Z" level=info msg="RemoveContainer for \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\"" Jan 30 15:53:14.347161 containerd[1577]: time="2025-01-30T15:53:14.347051693Z" level=info msg="RemoveContainer for \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\" returns successfully" Jan 30 15:53:14.348087 kubelet[2855]: I0130 15:53:14.347996 2855 scope.go:117] "RemoveContainer" containerID="1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea" Jan 30 15:53:14.351347 containerd[1577]: time="2025-01-30T15:53:14.351146416Z" level=info msg="RemoveContainer for \"1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea\"" Jan 30 15:53:14.357430 containerd[1577]: time="2025-01-30T15:53:14.357356930Z" level=info msg="RemoveContainer for \"1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea\" returns successfully" Jan 30 15:53:14.358098 kubelet[2855]: I0130 15:53:14.357882 2855 scope.go:117] "RemoveContainer" containerID="cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4" Jan 30 15:53:14.362594 containerd[1577]: time="2025-01-30T15:53:14.362482948Z" level=info msg="RemoveContainer for \"cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4\"" Jan 30 15:53:14.368169 containerd[1577]: time="2025-01-30T15:53:14.368112149Z" level=info msg="RemoveContainer for \"cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4\" returns successfully" Jan 30 15:53:14.368806 kubelet[2855]: I0130 15:53:14.368741 2855 scope.go:117] 
"RemoveContainer" containerID="ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166" Jan 30 15:53:14.371543 containerd[1577]: time="2025-01-30T15:53:14.371450203Z" level=info msg="RemoveContainer for \"ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166\"" Jan 30 15:53:14.379022 containerd[1577]: time="2025-01-30T15:53:14.378850177Z" level=info msg="RemoveContainer for \"ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166\" returns successfully" Jan 30 15:53:14.379632 kubelet[2855]: I0130 15:53:14.379377 2855 scope.go:117] "RemoveContainer" containerID="6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85" Jan 30 15:53:14.384287 containerd[1577]: time="2025-01-30T15:53:14.383712991Z" level=info msg="RemoveContainer for \"6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85\"" Jan 30 15:53:14.388147 containerd[1577]: time="2025-01-30T15:53:14.388024071Z" level=info msg="RemoveContainer for \"6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85\" returns successfully" Jan 30 15:53:14.388613 kubelet[2855]: I0130 15:53:14.388350 2855 scope.go:117] "RemoveContainer" containerID="9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec" Jan 30 15:53:14.389357 containerd[1577]: time="2025-01-30T15:53:14.388628174Z" level=error msg="ContainerStatus for \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\": not found" Jan 30 15:53:14.389860 kubelet[2855]: E0130 15:53:14.389617 2855 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\": not found" containerID="9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec" Jan 30 15:53:14.389860 kubelet[2855]: I0130 15:53:14.389645 2855 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec"} err="failed to get container status \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"9402264495a3a3ad9d71c4564ed6b703ee23b3ff76da8c92f74c7946bf2bc3ec\": not found" Jan 30 15:53:14.389860 kubelet[2855]: I0130 15:53:14.389670 2855 scope.go:117] "RemoveContainer" containerID="1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea" Jan 30 15:53:14.389968 containerd[1577]: time="2025-01-30T15:53:14.389811043Z" level=error msg="ContainerStatus for \"1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea\": not found" Jan 30 15:53:14.390317 kubelet[2855]: E0130 15:53:14.390160 2855 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea\": not found" containerID="1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea" Jan 30 15:53:14.390317 kubelet[2855]: I0130 15:53:14.390196 2855 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea"} err="failed to get container status \"1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d330107b897690c9980284262e7aad96ae3a265e913bdf165cf25a72c0957ea\": not found" Jan 30 15:53:14.390317 kubelet[2855]: I0130 15:53:14.390214 2855 scope.go:117] "RemoveContainer" containerID="cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4" Jan 30 15:53:14.390623 containerd[1577]: time="2025-01-30T15:53:14.390511448Z" level=error msg="ContainerStatus for \"cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4\": not found" Jan 30 15:53:14.391561 kubelet[2855]: E0130 15:53:14.391212 2855 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4\": not found" containerID="cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4" Jan 30 15:53:14.391561 kubelet[2855]: I0130 15:53:14.391232 2855 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4"} err="failed to get container status \"cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf0ebc8f098b60c214cdeae6c7bb76bf927ef552ae1ae1b5084c18a0701acfc4\": not found" Jan 30 15:53:14.391561 kubelet[2855]: I0130 15:53:14.391248 2855 scope.go:117] "RemoveContainer" containerID="ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166" Jan 30 15:53:14.392155 kubelet[2855]: E0130 15:53:14.392074 2855 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166\": not found" containerID="ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166" Jan 30 15:53:14.392155 kubelet[2855]: I0130 15:53:14.392095 2855 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166"} err="failed to get container status \"ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166\": not found" Jan 30 15:53:14.392311 containerd[1577]: time="2025-01-30T15:53:14.391945488Z" level=error msg="ContainerStatus for \"ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef9f59b0b0475872da8d493fd882e2b026378e93336d17fe4cc120cae0c3e166\": not found" Jan 30 15:53:14.393610 kubelet[2855]: I0130 15:53:14.392821 2855 scope.go:117] "RemoveContainer" containerID="6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85" Jan 30 15:53:14.393610 kubelet[2855]: E0130 15:53:14.393490 2855 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85\": not found" containerID="6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85" Jan 30 15:53:14.393610 kubelet[2855]: I0130 15:53:14.393532 2855 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85"} err="failed to get container status \"6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85\": not found" Jan 30 15:53:14.393746 containerd[1577]: time="2025-01-30T15:53:14.393203799Z" level=error msg="ContainerStatus for \"6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c608f38ee992475ad6dbffe13e9d3d24080adedafe258251ee2b3040b175e85\": not found" Jan 30 15:53:14.556470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae-rootfs.mount: Deactivated successfully. Jan 30 15:53:14.557591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1-rootfs.mount: Deactivated successfully. Jan 30 15:53:14.557850 systemd[1]: var-lib-kubelet-pods-395991b0\x2d1e39\x2d45bf\x2d9a19\x2d60ae6325572d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfhsm7.mount: Deactivated successfully. Jan 30 15:53:14.558114 systemd[1]: var-lib-kubelet-pods-62da5769\x2da556\x2d4d65\x2dac40\x2da32e671ed2e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddw9lj.mount: Deactivated successfully. Jan 30 15:53:14.558843 systemd[1]: var-lib-kubelet-pods-62da5769\x2da556\x2d4d65\x2dac40\x2da32e671ed2e5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 15:53:14.559363 systemd[1]: var-lib-kubelet-pods-62da5769\x2da556\x2d4d65\x2dac40\x2da32e671ed2e5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 15:53:15.552743 sshd[4425]: pam_unix(sshd:session): session closed for user core Jan 30 15:53:15.559756 systemd[1]: Started sshd@24-172.24.4.96:22-172.24.4.1:44832.service - OpenSSH per-connection server daemon (172.24.4.1:44832). Jan 30 15:53:15.565734 systemd[1]: sshd@23-172.24.4.96:22-172.24.4.1:60862.service: Deactivated successfully. Jan 30 15:53:15.579934 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 15:53:15.583063 systemd-logind[1561]: Session 26 logged out. Waiting for processes to exit. Jan 30 15:53:15.587219 systemd-logind[1561]: Removed session 26. 
Jan 30 15:53:15.652494 kubelet[2855]: I0130 15:53:15.652411 2855 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="395991b0-1e39-45bf-9a19-60ae6325572d" path="/var/lib/kubelet/pods/395991b0-1e39-45bf-9a19-60ae6325572d/volumes" Jan 30 15:53:15.653623 kubelet[2855]: I0130 15:53:15.653440 2855 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62da5769-a556-4d65-ac40-a32e671ed2e5" path="/var/lib/kubelet/pods/62da5769-a556-4d65-ac40-a32e671ed2e5/volumes" Jan 30 15:53:15.779722 kubelet[2855]: E0130 15:53:15.779653 2855 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 15:53:16.825417 sshd[4592]: Accepted publickey for core from 172.24.4.1 port 44832 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:53:16.828402 sshd[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:53:16.840892 systemd-logind[1561]: New session 27 of user core. Jan 30 15:53:16.850280 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 15:53:18.154393 kubelet[2855]: I0130 15:53:18.154039 2855 topology_manager.go:215] "Topology Admit Handler" podUID="b341c1f3-c8c3-46ff-8bc1-0f90148803da" podNamespace="kube-system" podName="cilium-qdr5j" Jan 30 15:53:18.154393 kubelet[2855]: E0130 15:53:18.154114 2855 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="62da5769-a556-4d65-ac40-a32e671ed2e5" containerName="mount-cgroup" Jan 30 15:53:18.154393 kubelet[2855]: E0130 15:53:18.154126 2855 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="62da5769-a556-4d65-ac40-a32e671ed2e5" containerName="apply-sysctl-overwrites" Jan 30 15:53:18.154393 kubelet[2855]: E0130 15:53:18.154133 2855 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="395991b0-1e39-45bf-9a19-60ae6325572d" containerName="cilium-operator" Jan 30 15:53:18.154393 kubelet[2855]: E0130 15:53:18.154140 2855 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="62da5769-a556-4d65-ac40-a32e671ed2e5" containerName="clean-cilium-state" Jan 30 15:53:18.154393 kubelet[2855]: E0130 15:53:18.154147 2855 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="62da5769-a556-4d65-ac40-a32e671ed2e5" containerName="mount-bpf-fs" Jan 30 15:53:18.154393 kubelet[2855]: E0130 15:53:18.154154 2855 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="62da5769-a556-4d65-ac40-a32e671ed2e5" containerName="cilium-agent" Jan 30 15:53:18.154393 kubelet[2855]: I0130 15:53:18.154177 2855 memory_manager.go:354] "RemoveStaleState removing state" podUID="62da5769-a556-4d65-ac40-a32e671ed2e5" containerName="cilium-agent" Jan 30 15:53:18.154393 kubelet[2855]: I0130 15:53:18.154184 2855 memory_manager.go:354] "RemoveStaleState removing state" podUID="395991b0-1e39-45bf-9a19-60ae6325572d" containerName="cilium-operator" Jan 30 15:53:18.242216 kubelet[2855]: I0130 15:53:18.241241 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b341c1f3-c8c3-46ff-8bc1-0f90148803da-cni-path\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.242216 kubelet[2855]: I0130 15:53:18.241326 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/b341c1f3-c8c3-46ff-8bc1-0f90148803da-hubble-tls\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.242216 kubelet[2855]: I0130 15:53:18.241382 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b341c1f3-c8c3-46ff-8bc1-0f90148803da-host-proc-sys-kernel\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.242216 kubelet[2855]: I0130 15:53:18.241430 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b341c1f3-c8c3-46ff-8bc1-0f90148803da-lib-modules\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.242216 kubelet[2855]: I0130 15:53:18.241474 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b341c1f3-c8c3-46ff-8bc1-0f90148803da-xtables-lock\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.242216 kubelet[2855]: I0130 15:53:18.241558 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b341c1f3-c8c3-46ff-8bc1-0f90148803da-hostproc\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.242834 kubelet[2855]: I0130 15:53:18.241625 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlbfp\" (UniqueName: \"kubernetes.io/projected/b341c1f3-c8c3-46ff-8bc1-0f90148803da-kube-api-access-nlbfp\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.242834 kubelet[2855]: I0130 15:53:18.241672 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b341c1f3-c8c3-46ff-8bc1-0f90148803da-cilium-run\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.242834 kubelet[2855]: I0130 15:53:18.241716 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b341c1f3-c8c3-46ff-8bc1-0f90148803da-cilium-ipsec-secrets\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.242834 kubelet[2855]: I0130 15:53:18.241773 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b341c1f3-c8c3-46ff-8bc1-0f90148803da-host-proc-sys-net\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.242834 kubelet[2855]: I0130 15:53:18.241845 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b341c1f3-c8c3-46ff-8bc1-0f90148803da-cilium-config-path\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " 
pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.243135 kubelet[2855]: I0130 15:53:18.241896 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b341c1f3-c8c3-46ff-8bc1-0f90148803da-bpf-maps\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.243135 kubelet[2855]: I0130 15:53:18.241936 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b341c1f3-c8c3-46ff-8bc1-0f90148803da-clustermesh-secrets\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.243135 kubelet[2855]: I0130 15:53:18.241979 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b341c1f3-c8c3-46ff-8bc1-0f90148803da-cilium-cgroup\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.243135 kubelet[2855]: I0130 15:53:18.242017 2855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b341c1f3-c8c3-46ff-8bc1-0f90148803da-etc-cni-netd\") pod \"cilium-qdr5j\" (UID: \"b341c1f3-c8c3-46ff-8bc1-0f90148803da\") " pod="kube-system/cilium-qdr5j" Jan 30 15:53:18.371807 sshd[4592]: pam_unix(sshd:session): session closed for user core Jan 30 15:53:18.415657 systemd[1]: sshd@24-172.24.4.96:22-172.24.4.1:44832.service: Deactivated successfully. Jan 30 15:53:18.417208 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 15:53:18.426196 systemd-logind[1561]: Session 27 logged out. Waiting for processes to exit. Jan 30 15:53:18.432897 systemd[1]: Started sshd@25-172.24.4.96:22-172.24.4.1:44836.service - OpenSSH per-connection server daemon (172.24.4.1:44836). Jan 30 15:53:18.435600 systemd-logind[1561]: Removed session 27. Jan 30 15:53:18.461338 containerd[1577]: time="2025-01-30T15:53:18.461275150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qdr5j,Uid:b341c1f3-c8c3-46ff-8bc1-0f90148803da,Namespace:kube-system,Attempt:0,}" Jan 30 15:53:18.488201 containerd[1577]: time="2025-01-30T15:53:18.488060618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 15:53:18.488201 containerd[1577]: time="2025-01-30T15:53:18.488152701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 15:53:18.488201 containerd[1577]: time="2025-01-30T15:53:18.488189080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:53:18.488706 containerd[1577]: time="2025-01-30T15:53:18.488330626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 15:53:18.524867 containerd[1577]: time="2025-01-30T15:53:18.524583652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qdr5j,Uid:b341c1f3-c8c3-46ff-8bc1-0f90148803da,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbe19116eab42b8950768357434697b52a28afdd21b79b75758cb22f976b872a\"" Jan 30 15:53:18.528365 containerd[1577]: time="2025-01-30T15:53:18.528316843Z" level=info msg="CreateContainer within sandbox \"dbe19116eab42b8950768357434697b52a28afdd21b79b75758cb22f976b872a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 15:53:18.546812 containerd[1577]: time="2025-01-30T15:53:18.546735926Z" level=info msg="CreateContainer within sandbox \"dbe19116eab42b8950768357434697b52a28afdd21b79b75758cb22f976b872a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b996f90d1cf91326dc642c54a6d89fbc3c48a25a6892a4646851f6e7f2e40398\"" Jan 30 15:53:18.547455 containerd[1577]: time="2025-01-30T15:53:18.547411789Z" level=info msg="StartContainer for \"b996f90d1cf91326dc642c54a6d89fbc3c48a25a6892a4646851f6e7f2e40398\"" Jan 30 15:53:18.602545 containerd[1577]: time="2025-01-30T15:53:18.601809642Z" level=info msg="StartContainer for \"b996f90d1cf91326dc642c54a6d89fbc3c48a25a6892a4646851f6e7f2e40398\" returns successfully" Jan 30 15:53:18.669043 containerd[1577]: time="2025-01-30T15:53:18.668483493Z" level=info msg="shim disconnected" id=b996f90d1cf91326dc642c54a6d89fbc3c48a25a6892a4646851f6e7f2e40398 namespace=k8s.io Jan 30 15:53:18.669043 containerd[1577]: time="2025-01-30T15:53:18.668608639Z" level=warning msg="cleaning up after shim disconnected" id=b996f90d1cf91326dc642c54a6d89fbc3c48a25a6892a4646851f6e7f2e40398 namespace=k8s.io Jan 30 15:53:18.669043 containerd[1577]: time="2025-01-30T15:53:18.668632123Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:53:19.261335 containerd[1577]: time="2025-01-30T15:53:19.260970569Z" level=info msg="CreateContainer within sandbox \"dbe19116eab42b8950768357434697b52a28afdd21b79b75758cb22f976b872a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 15:53:19.660679 containerd[1577]: time="2025-01-30T15:53:19.660434000Z" level=info msg="CreateContainer within sandbox \"dbe19116eab42b8950768357434697b52a28afdd21b79b75758cb22f976b872a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fe5d58aedaa8c4371ac7399d50406e166911849eeae496afa63912eb2baed53b\"" Jan 30 15:53:19.662793 containerd[1577]: time="2025-01-30T15:53:19.661979039Z" level=info msg="StartContainer for \"fe5d58aedaa8c4371ac7399d50406e166911849eeae496afa63912eb2baed53b\"" Jan 30 15:53:19.703401 sshd[4613]: Accepted publickey for core from 172.24.4.1 port 44836 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:53:19.705472 sshd[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:53:19.727794 systemd-logind[1561]: New session 28 of user core. Jan 30 15:53:19.735021 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 30 15:53:19.787328 kubelet[2855]: I0130 15:53:19.787021 2855 setters.go:580] "Node became not ready" node="ci-4081-3-0-c-6e27ecb2ae.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T15:53:19Z","lastTransitionTime":"2025-01-30T15:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 15:53:20.156594 containerd[1577]: time="2025-01-30T15:53:20.156272266Z" level=info msg="StartContainer for \"fe5d58aedaa8c4371ac7399d50406e166911849eeae496afa63912eb2baed53b\" returns successfully" Jan 30 15:53:20.327592 containerd[1577]: time="2025-01-30T15:53:20.326592845Z" level=info msg="shim disconnected" id=fe5d58aedaa8c4371ac7399d50406e166911849eeae496afa63912eb2baed53b namespace=k8s.io Jan 30 15:53:20.327592 containerd[1577]: time="2025-01-30T15:53:20.326724563Z" level=warning msg="cleaning up after shim disconnected" id=fe5d58aedaa8c4371ac7399d50406e166911849eeae496afa63912eb2baed53b namespace=k8s.io Jan 30 15:53:20.327592 containerd[1577]: time="2025-01-30T15:53:20.326749420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:53:20.361480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe5d58aedaa8c4371ac7399d50406e166911849eeae496afa63912eb2baed53b-rootfs.mount: Deactivated successfully. Jan 30 15:53:20.426449 sshd[4613]: pam_unix(sshd:session): session closed for user core Jan 30 15:53:20.430769 systemd[1]: Started sshd@26-172.24.4.96:22-172.24.4.1:44844.service - OpenSSH per-connection server daemon (172.24.4.1:44844). Jan 30 15:53:20.431201 systemd[1]: sshd@25-172.24.4.96:22-172.24.4.1:44836.service: Deactivated successfully. Jan 30 15:53:20.435142 systemd-logind[1561]: Session 28 logged out. Waiting for processes to exit. Jan 30 15:53:20.437353 systemd[1]: session-28.scope: Deactivated successfully. Jan 30 15:53:20.438862 systemd-logind[1561]: Removed session 28. Jan 30 15:53:20.782188 kubelet[2855]: E0130 15:53:20.781931 2855 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 15:53:21.280596 containerd[1577]: time="2025-01-30T15:53:21.280547271Z" level=info msg="CreateContainer within sandbox \"dbe19116eab42b8950768357434697b52a28afdd21b79b75758cb22f976b872a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 15:53:21.360103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1743571158.mount: Deactivated successfully. 
Jan 30 15:53:21.447997 containerd[1577]: time="2025-01-30T15:53:21.447833361Z" level=info msg="CreateContainer within sandbox \"dbe19116eab42b8950768357434697b52a28afdd21b79b75758cb22f976b872a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"595169838b57bf2e04d88005e8f5563f7e78d912b5e95430a387ba9e5cb13eea\"" Jan 30 15:53:21.449086 containerd[1577]: time="2025-01-30T15:53:21.448652524Z" level=info msg="StartContainer for \"595169838b57bf2e04d88005e8f5563f7e78d912b5e95430a387ba9e5cb13eea\"" Jan 30 15:53:21.554413 containerd[1577]: time="2025-01-30T15:53:21.554297729Z" level=info msg="StartContainer for \"595169838b57bf2e04d88005e8f5563f7e78d912b5e95430a387ba9e5cb13eea\" returns successfully" Jan 30 15:53:21.572078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-595169838b57bf2e04d88005e8f5563f7e78d912b5e95430a387ba9e5cb13eea-rootfs.mount: Deactivated successfully. Jan 30 15:53:21.579989 containerd[1577]: time="2025-01-30T15:53:21.579879784Z" level=info msg="shim disconnected" id=595169838b57bf2e04d88005e8f5563f7e78d912b5e95430a387ba9e5cb13eea namespace=k8s.io Jan 30 15:53:21.579989 containerd[1577]: time="2025-01-30T15:53:21.579930259Z" level=warning msg="cleaning up after shim disconnected" id=595169838b57bf2e04d88005e8f5563f7e78d912b5e95430a387ba9e5cb13eea namespace=k8s.io Jan 30 15:53:21.579989 containerd[1577]: time="2025-01-30T15:53:21.579953593Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:53:21.695529 sshd[4787]: Accepted publickey for core from 172.24.4.1 port 44844 ssh2: RSA SHA256:FgldunhGUdcY/K9zdh7KCnsBf8GB30TJ+uvCgkWU8UI Jan 30 15:53:21.698342 sshd[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 15:53:21.708781 systemd-logind[1561]: New session 29 of user core. Jan 30 15:53:21.714022 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 30 15:53:22.296799 containerd[1577]: time="2025-01-30T15:53:22.296744206Z" level=info msg="CreateContainer within sandbox \"dbe19116eab42b8950768357434697b52a28afdd21b79b75758cb22f976b872a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 15:53:22.325848 containerd[1577]: time="2025-01-30T15:53:22.325786734Z" level=info msg="CreateContainer within sandbox \"dbe19116eab42b8950768357434697b52a28afdd21b79b75758cb22f976b872a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a8c8279e1033da9cc70a75385e97aa813d7b88752a3bb8a9bbfce72033568bf3\"" Jan 30 15:53:22.326514 containerd[1577]: time="2025-01-30T15:53:22.326455173Z" level=info msg="StartContainer for \"a8c8279e1033da9cc70a75385e97aa813d7b88752a3bb8a9bbfce72033568bf3\"" Jan 30 15:53:22.392951 containerd[1577]: time="2025-01-30T15:53:22.391876309Z" level=info msg="StartContainer for \"a8c8279e1033da9cc70a75385e97aa813d7b88752a3bb8a9bbfce72033568bf3\" returns successfully" Jan 30 15:53:22.406072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8c8279e1033da9cc70a75385e97aa813d7b88752a3bb8a9bbfce72033568bf3-rootfs.mount: Deactivated successfully. 
Jan 30 15:53:22.416530 containerd[1577]: time="2025-01-30T15:53:22.416287087Z" level=info msg="shim disconnected" id=a8c8279e1033da9cc70a75385e97aa813d7b88752a3bb8a9bbfce72033568bf3 namespace=k8s.io Jan 30 15:53:22.416530 containerd[1577]: time="2025-01-30T15:53:22.416375282Z" level=warning msg="cleaning up after shim disconnected" id=a8c8279e1033da9cc70a75385e97aa813d7b88752a3bb8a9bbfce72033568bf3 namespace=k8s.io Jan 30 15:53:22.416530 containerd[1577]: time="2025-01-30T15:53:22.416386263Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 15:53:23.311710 containerd[1577]: time="2025-01-30T15:53:23.310814194Z" level=info msg="CreateContainer within sandbox \"dbe19116eab42b8950768357434697b52a28afdd21b79b75758cb22f976b872a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 15:53:23.372440 containerd[1577]: time="2025-01-30T15:53:23.372197935Z" level=info msg="CreateContainer within sandbox \"dbe19116eab42b8950768357434697b52a28afdd21b79b75758cb22f976b872a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5f12849a1015396457ce8bd30bc4afc672ce586e0e01a3c0f3d938a462d6a908\"" Jan 30 15:53:23.376126 containerd[1577]: time="2025-01-30T15:53:23.376084681Z" level=info msg="StartContainer for \"5f12849a1015396457ce8bd30bc4afc672ce586e0e01a3c0f3d938a462d6a908\"" Jan 30 15:53:23.406117 systemd[1]: run-containerd-runc-k8s.io-5f12849a1015396457ce8bd30bc4afc672ce586e0e01a3c0f3d938a462d6a908-runc.Herd1o.mount: Deactivated successfully. Jan 30 15:53:23.441755 containerd[1577]: time="2025-01-30T15:53:23.441708774Z" level=info msg="StartContainer for \"5f12849a1015396457ce8bd30bc4afc672ce586e0e01a3c0f3d938a462d6a908\" returns successfully" Jan 30 15:53:23.775537 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 15:53:23.827559 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Jan 30 15:53:24.487009 systemd[1]: run-containerd-runc-k8s.io-5f12849a1015396457ce8bd30bc4afc672ce586e0e01a3c0f3d938a462d6a908-runc.yxiqnu.mount: Deactivated successfully. Jan 30 15:53:24.536921 kubelet[2855]: E0130 15:53:24.536865 2855 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37020->127.0.0.1:46573: write tcp 127.0.0.1:37020->127.0.0.1:46573: write: broken pipe Jan 30 15:53:26.728993 kubelet[2855]: E0130 15:53:26.728955 2855 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57198->127.0.0.1:46573: write tcp 127.0.0.1:57198->127.0.0.1:46573: write: broken pipe Jan 30 15:53:26.968531 systemd-networkd[1214]: lxc_health: Link UP Jan 30 15:53:26.977650 systemd-networkd[1214]: lxc_health: Gained carrier Jan 30 15:53:28.502483 kubelet[2855]: I0130 15:53:28.502379 2855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qdr5j" podStartSLOduration=10.502347481 podStartE2EDuration="10.502347481s" podCreationTimestamp="2025-01-30 15:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 15:53:24.377654701 +0000 UTC m=+168.870003786" watchObservedRunningTime="2025-01-30 15:53:28.502347481 +0000 UTC m=+172.994696535" Jan 30 15:53:28.799057 systemd-networkd[1214]: lxc_health: Gained IPv6LL Jan 30 15:53:33.458055 sshd[4787]: pam_unix(sshd:session): session closed for user core Jan 30 15:53:33.467905 systemd[1]: sshd@26-172.24.4.96:22-172.24.4.1:44844.service: Deactivated successfully. 
Jan 30 15:53:33.475583 systemd[1]: session-29.scope: Deactivated successfully. Jan 30 15:53:33.475882 systemd-logind[1561]: Session 29 logged out. Waiting for processes to exit. Jan 30 15:53:33.479717 systemd-logind[1561]: Removed session 29. Jan 30 15:53:35.674091 containerd[1577]: time="2025-01-30T15:53:35.673924161Z" level=info msg="StopPodSandbox for \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\"" Jan 30 15:53:35.674862 containerd[1577]: time="2025-01-30T15:53:35.674102056Z" level=info msg="TearDown network for sandbox \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\" successfully" Jan 30 15:53:35.674862 containerd[1577]: time="2025-01-30T15:53:35.674134857Z" level=info msg="StopPodSandbox for \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\" returns successfully" Jan 30 15:53:35.675299 containerd[1577]: time="2025-01-30T15:53:35.675245327Z" level=info msg="RemovePodSandbox for \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\"" Jan 30 15:53:35.675425 containerd[1577]: time="2025-01-30T15:53:35.675314356Z" level=info msg="Forcibly stopping sandbox \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\"" Jan 30 15:53:35.675665 containerd[1577]: time="2025-01-30T15:53:35.675425876Z" level=info msg="TearDown network for sandbox \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\" successfully" Jan 30 15:53:35.979931 containerd[1577]: time="2025-01-30T15:53:35.979539212Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 15:53:35.979931 containerd[1577]: time="2025-01-30T15:53:35.979724801Z" level=info msg="RemovePodSandbox \"2ade7a418e19cf9224e697a301dff7c485315bc97ff0880f832374f486e742ae\" returns successfully" Jan 30 15:53:35.981359 containerd[1577]: time="2025-01-30T15:53:35.980822606Z" level=info msg="StopPodSandbox for \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\"" Jan 30 15:53:35.982059 containerd[1577]: time="2025-01-30T15:53:35.981595421Z" level=info msg="TearDown network for sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" successfully" Jan 30 15:53:35.982059 containerd[1577]: time="2025-01-30T15:53:35.981659722Z" level=info msg="StopPodSandbox for \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" returns successfully" Jan 30 15:53:35.982663 containerd[1577]: time="2025-01-30T15:53:35.982539668Z" level=info msg="RemovePodSandbox for \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\"" Jan 30 15:53:35.982663 containerd[1577]: time="2025-01-30T15:53:35.982617854Z" level=info msg="Forcibly stopping sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\"" Jan 30 15:53:35.982822 containerd[1577]: time="2025-01-30T15:53:35.982782604Z" level=info msg="TearDown network for sandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" successfully" Jan 30 15:53:36.063929 containerd[1577]: time="2025-01-30T15:53:36.063808570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 15:53:36.063929 containerd[1577]: time="2025-01-30T15:53:36.063927003Z" level=info msg="RemovePodSandbox \"3048f00e66b4a293faf391b9bf70f4bf2d7b955fc4a4558b9d955ed89649faf1\" returns successfully"