Jan 13 21:52:11.082505 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 13 21:52:11.082569 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:52:11.082593 kernel: BIOS-provided physical RAM map: Jan 13 21:52:11.082611 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 13 21:52:11.082628 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 13 21:52:11.082651 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 13 21:52:11.082671 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 13 21:52:11.082689 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 13 21:52:11.082707 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 13 21:52:11.082725 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 13 21:52:11.082743 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 13 21:52:11.082761 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 13 21:52:11.082779 kernel: NX (Execute Disable) protection: active Jan 13 21:52:11.082797 kernel: APIC: Static calls initialized Jan 13 21:52:11.082823 kernel: SMBIOS 3.0.0 present. Jan 13 21:52:11.082842 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 13 21:52:11.082861 kernel: Hypervisor detected: KVM Jan 13 21:52:11.082880 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 21:52:11.082899 kernel: kvm-clock: using sched offset of 3343040987 cycles Jan 13 21:52:11.082956 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 21:52:11.083006 kernel: tsc: Detected 1996.249 MHz processor Jan 13 21:52:11.083026 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 21:52:11.083046 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 21:52:11.083065 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 13 21:52:11.083085 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 13 21:52:11.083105 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 21:52:11.083124 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 13 21:52:11.083143 kernel: ACPI: Early table checksum verification disabled Jan 13 21:52:11.083169 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 13 21:52:11.083209 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:52:11.083229 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:52:11.083248 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:52:11.083266 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 13 21:52:11.083286 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:52:11.083305 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 13 21:52:11.083324 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 13 21:52:11.083343 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 13 21:52:11.083369 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 13 21:52:11.083388 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 13 21:52:11.083407 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 13 21:52:11.083434 kernel: No NUMA configuration found Jan 13 21:52:11.083454 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 13 21:52:11.083474 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 13 21:52:11.083500 kernel: Zone ranges: Jan 13 21:52:11.083520 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 21:52:11.083541 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 13 21:52:11.083561 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 13 21:52:11.083581 kernel: Movable zone start for each node Jan 13 21:52:11.083601 kernel: Early memory node ranges Jan 13 21:52:11.083621 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 13 21:52:11.083640 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 13 21:52:11.083665 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 13 21:52:11.083685 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 13 21:52:11.083705 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 21:52:11.083725 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 13 21:52:11.083745 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 13 21:52:11.083765 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 13 21:52:11.083785 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 21:52:11.083805 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 21:52:11.083826 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 13 21:52:11.083850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 21:52:11.083871 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 21:52:11.083891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 21:52:11.083911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 21:52:11.083931 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 21:52:11.083951 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 13 21:52:11.086080 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 13 21:52:11.086105 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 13 21:52:11.086126 kernel: Booting paravirtualized kernel on KVM Jan 13 21:52:11.086157 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 21:52:11.086178 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 13 21:52:11.086199 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 13 21:52:11.086220 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 13 21:52:11.086239 kernel: pcpu-alloc: [0] 0 1 Jan 13 21:52:11.086259 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 13 21:52:11.086283 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:52:11.086306 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:52:11.086331 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 21:52:11.086351 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:52:11.086372 kernel: Fallback order for Node 0: 0 Jan 13 21:52:11.086392 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 13 21:52:11.086412 kernel: Policy zone: Normal Jan 13 21:52:11.086432 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:52:11.086452 kernel: software IO TLB: area num 2. Jan 13 21:52:11.086474 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 227308K reserved, 0K cma-reserved) Jan 13 21:52:11.086495 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 21:52:11.086520 kernel: ftrace: allocating 37918 entries in 149 pages Jan 13 21:52:11.086540 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 21:52:11.086561 kernel: Dynamic Preempt: voluntary Jan 13 21:52:11.086581 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:52:11.086603 kernel: rcu: RCU event tracing is enabled. Jan 13 21:52:11.086625 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 21:52:11.086646 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:52:11.086667 kernel: Rude variant of Tasks RCU enabled. Jan 13 21:52:11.086687 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:52:11.086707 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 21:52:11.086733 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 21:52:11.086753 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 13 21:52:11.086774 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 21:52:11.086794 kernel: Console: colour VGA+ 80x25 Jan 13 21:52:11.086814 kernel: printk: console [tty0] enabled Jan 13 21:52:11.086834 kernel: printk: console [ttyS0] enabled Jan 13 21:52:11.086854 kernel: ACPI: Core revision 20230628 Jan 13 21:52:11.086874 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 21:52:11.086894 kernel: x2apic enabled Jan 13 21:52:11.086949 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 21:52:11.086997 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 21:52:11.087018 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 13 21:52:11.087039 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 13 21:52:11.087059 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 13 21:52:11.087079 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 13 21:52:11.087100 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 21:52:11.087120 kernel: Spectre V2 : Mitigation: Retpolines Jan 13 21:52:11.087141 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 21:52:11.087169 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 21:52:11.087189 kernel: Speculative Store Bypass: Vulnerable Jan 13 21:52:11.087209 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 13 21:52:11.087230 kernel: Freeing SMP alternatives memory: 32K Jan 13 21:52:11.087265 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:52:11.087291 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:52:11.087312 kernel: landlock: Up and running. Jan 13 21:52:11.087334 kernel: SELinux: Initializing. Jan 13 21:52:11.087355 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:52:11.087377 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:52:11.087399 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 13 21:52:11.087426 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:52:11.087448 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:52:11.087470 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:52:11.087491 kernel: Performance Events: AMD PMU driver. Jan 13 21:52:11.087512 kernel: ... version: 0 Jan 13 21:52:11.087538 kernel: ... bit width: 48 Jan 13 21:52:11.087559 kernel: ... generic registers: 4 Jan 13 21:52:11.087580 kernel: ... value mask: 0000ffffffffffff Jan 13 21:52:11.087602 kernel: ... max period: 00007fffffffffff Jan 13 21:52:11.087623 kernel: ... fixed-purpose events: 0 Jan 13 21:52:11.087644 kernel: ... event mask: 000000000000000f Jan 13 21:52:11.087665 kernel: signal: max sigframe size: 1440 Jan 13 21:52:11.087687 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:52:11.087709 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:52:11.087734 kernel: smp: Bringing up secondary CPUs ... Jan 13 21:52:11.087755 kernel: smpboot: x86: Booting SMP configuration: Jan 13 21:52:11.087776 kernel: .... 
node #0, CPUs: #1 Jan 13 21:52:11.087797 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 21:52:11.087818 kernel: smpboot: Max logical packages: 2 Jan 13 21:52:11.087840 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 13 21:52:11.087861 kernel: devtmpfs: initialized Jan 13 21:52:11.087881 kernel: x86/mm: Memory block size: 128MB Jan 13 21:52:11.087903 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:52:11.087925 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 21:52:11.087951 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:52:11.090040 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:52:11.090065 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:52:11.090087 kernel: audit: type=2000 audit(1736805129.994:1): state=initialized audit_enabled=0 res=1 Jan 13 21:52:11.090109 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:52:11.090130 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 21:52:11.090152 kernel: cpuidle: using governor menu Jan 13 21:52:11.090173 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:52:11.090195 kernel: dca service started, version 1.12.1 Jan 13 21:52:11.090226 kernel: PCI: Using configuration type 1 for base access Jan 13 21:52:11.090248 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 13 21:52:11.090269 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:52:11.090291 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:52:11.090313 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:52:11.090334 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:52:11.090355 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:52:11.090377 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:52:11.090398 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 21:52:11.090425 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 21:52:11.090447 kernel: ACPI: Interpreter enabled Jan 13 21:52:11.090468 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 21:52:11.090489 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 21:52:11.090511 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 21:52:11.090532 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 21:52:11.090555 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 13 21:52:11.090576 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 21:52:11.090951 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:52:11.091260 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 13 21:52:11.091486 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 13 21:52:11.091519 kernel: acpiphp: Slot [3] registered Jan 13 21:52:11.091541 kernel: acpiphp: Slot [4] registered Jan 13 21:52:11.091563 kernel: acpiphp: Slot [5] registered Jan 13 21:52:11.091584 kernel: acpiphp: Slot [6] registered Jan 13 21:52:11.091606 kernel: acpiphp: Slot [7] registered Jan 13 21:52:11.091646 kernel: acpiphp: Slot [8] registered Jan 13 21:52:11.091676 kernel: acpiphp: Slot [9] registered Jan 13 21:52:11.091707 kernel: acpiphp: Slot [10] registered Jan 13 21:52:11.091741 
kernel: acpiphp: Slot [11] registered Jan 13 21:52:11.091775 kernel: acpiphp: Slot [12] registered Jan 13 21:52:11.091806 kernel: acpiphp: Slot [13] registered Jan 13 21:52:11.091827 kernel: acpiphp: Slot [14] registered Jan 13 21:52:11.091848 kernel: acpiphp: Slot [15] registered Jan 13 21:52:11.091869 kernel: acpiphp: Slot [16] registered Jan 13 21:52:11.091899 kernel: acpiphp: Slot [17] registered Jan 13 21:52:11.091920 kernel: acpiphp: Slot [18] registered Jan 13 21:52:11.091942 kernel: acpiphp: Slot [19] registered Jan 13 21:52:11.094022 kernel: acpiphp: Slot [20] registered Jan 13 21:52:11.094053 kernel: acpiphp: Slot [21] registered Jan 13 21:52:11.094074 kernel: acpiphp: Slot [22] registered Jan 13 21:52:11.094096 kernel: acpiphp: Slot [23] registered Jan 13 21:52:11.094118 kernel: acpiphp: Slot [24] registered Jan 13 21:52:11.094139 kernel: acpiphp: Slot [25] registered Jan 13 21:52:11.094160 kernel: acpiphp: Slot [26] registered Jan 13 21:52:11.094190 kernel: acpiphp: Slot [27] registered Jan 13 21:52:11.094211 kernel: acpiphp: Slot [28] registered Jan 13 21:52:11.094233 kernel: acpiphp: Slot [29] registered Jan 13 21:52:11.094254 kernel: acpiphp: Slot [30] registered Jan 13 21:52:11.094276 kernel: acpiphp: Slot [31] registered Jan 13 21:52:11.094298 kernel: PCI host bridge to bus 0000:00 Jan 13 21:52:11.094556 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 21:52:11.094764 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 21:52:11.095185 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 21:52:11.095393 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 21:52:11.095589 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 13 21:52:11.095784 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 21:52:11.096083 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 13 21:52:11.096339 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 13 21:52:11.096602 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 13 21:52:11.096833 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 13 21:52:11.097115 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 13 21:52:11.097342 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 13 21:52:11.097566 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 13 21:52:11.097792 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 13 21:52:11.098105 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 13 21:52:11.098351 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 13 21:52:11.098574 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 13 21:52:11.098818 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 13 21:52:11.099115 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 13 21:52:11.099346 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 13 21:52:11.099573 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 13 21:52:11.099799 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 13 21:52:11.102091 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 21:52:11.102354 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 13 21:52:11.102585 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 13 21:52:11.102813 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 13 21:52:11.103178 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 13 21:52:11.103404 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 13 21:52:11.103639 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 13 21:52:11.103877 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 13 21:52:11.106165 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 13 21:52:11.106348 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 13 21:52:11.106532 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 13 21:52:11.106703 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 13 21:52:11.106944 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 13 21:52:11.107181 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 21:52:11.107363 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 13 21:52:11.107530 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 13 21:52:11.107696 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 13 21:52:11.107721 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 21:52:11.107738 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 21:52:11.107754 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 21:52:11.107771 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 21:52:11.107787 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 13 21:52:11.107809 kernel: iommu: Default domain type: Translated Jan 13 21:52:11.107826 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 21:52:11.107842 kernel: PCI: Using ACPI for IRQ routing Jan 13 21:52:11.107858 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 21:52:11.107874 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 13 21:52:11.107890 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 13 21:52:11.108080 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 13 21:52:11.108175 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 13 21:52:11.108270 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 21:52:11.108283 kernel: vgaarb: loaded Jan 13 21:52:11.108292 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 21:52:11.108301 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 21:52:11.108310 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:52:11.108319 kernel: pnp: PnP ACPI init Jan 13 21:52:11.108408 kernel: pnp 00:03: [dma 2] Jan 13 21:52:11.108424 kernel: pnp: PnP ACPI: found 5 devices Jan 13 21:52:11.108433 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 21:52:11.108446 kernel: NET: Registered PF_INET protocol family Jan 13 21:52:11.108455 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:52:11.108464 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 21:52:11.108473 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:52:11.108481 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 21:52:11.108490 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 13 21:52:11.108499 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 21:52:11.108508 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:52:11.108519 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:52:11.108528 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:52:11.108537 kernel: NET: Registered PF_XDP protocol family Jan 13 21:52:11.108617 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 21:52:11.108696 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 21:52:11.108831 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 21:52:11.108946 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 13 21:52:11.109068 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 13 21:52:11.109160 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 13 21:52:11.109257 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 13 21:52:11.109270 kernel: PCI: CLS 0 bytes, default 64 Jan 13 21:52:11.109280 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 21:52:11.109289 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 13 21:52:11.109298 kernel: Initialise system trusted keyrings Jan 13 21:52:11.109307 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 21:52:11.109316 kernel: Key type asymmetric registered Jan 13 21:52:11.109325 kernel: Asymmetric key parser 'x509' registered Jan 13 21:52:11.109338 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 21:52:11.109347 kernel: io scheduler mq-deadline registered Jan 13 21:52:11.109356 kernel: io scheduler kyber registered Jan 13 21:52:11.109364 kernel: io scheduler bfq registered Jan 13 21:52:11.109373 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 21:52:11.109383 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 13 21:52:11.109392 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 13 21:52:11.109401 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 13 21:52:11.109410 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 13 21:52:11.109421 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:52:11.109430 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 21:52:11.109438 kernel: random: crng init done Jan 13 21:52:11.109448 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 21:52:11.109457 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 21:52:11.109465 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 21:52:11.109560 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 13 21:52:11.109575 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 21:52:11.109654 kernel: rtc_cmos 00:04: registered as rtc0 Jan 13 21:52:11.109741 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:52:10 UTC (1736805130) Jan 13 21:52:11.109823 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 13 21:52:11.109836 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 13 21:52:11.109845 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:52:11.109854 kernel: Segment Routing with IPv6 Jan 13 21:52:11.109862 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:52:11.109871 kernel: NET: Registered PF_PACKET 
protocol family Jan 13 21:52:11.109880 kernel: Key type dns_resolver registered Jan 13 21:52:11.109892 kernel: IPI shorthand broadcast: enabled Jan 13 21:52:11.109901 kernel: sched_clock: Marking stable (981016968, 172797024)->(1185764401, -31950409) Jan 13 21:52:11.109910 kernel: registered taskstats version 1 Jan 13 21:52:11.109919 kernel: Loading compiled-in X.509 certificates Jan 13 21:52:11.109928 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 21:52:11.109937 kernel: Key type .fscrypt registered Jan 13 21:52:11.109946 kernel: Key type fscrypt-provisioning registered Jan 13 21:52:11.109969 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 21:52:11.109978 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:52:11.109990 kernel: ima: No architecture policies found Jan 13 21:52:11.109999 kernel: clk: Disabling unused clocks Jan 13 21:52:11.110007 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:52:11.110016 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:52:11.110025 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:52:11.110034 kernel: Run /init as init process Jan 13 21:52:11.110042 kernel: with arguments: Jan 13 21:52:11.110051 kernel: /init Jan 13 21:52:11.110059 kernel: with environment: Jan 13 21:52:11.110070 kernel: HOME=/ Jan 13 21:52:11.110079 kernel: TERM=linux Jan 13 21:52:11.110087 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:52:11.110099 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:52:11.110116 systemd[1]: Detected virtualization kvm. Jan 13 21:52:11.110137 systemd[1]: Detected architecture x86-64. Jan 13 21:52:11.110146 systemd[1]: Running in initrd. Jan 13 21:52:11.110159 systemd[1]: No hostname configured, using default hostname. Jan 13 21:52:11.110169 systemd[1]: Hostname set to . Jan 13 21:52:11.110179 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:52:11.110188 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:52:11.110198 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:52:11.110207 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:52:11.110218 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:52:11.110238 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:52:11.110251 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:52:11.110261 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:52:11.110272 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:52:11.110282 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:52:11.110294 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
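An aside on the calibration figures in the kernel messages above: "Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)" and the later "Total of 2 processors activated (7984.99 BogoMIPS)" reconcile exactly, because the kernel derives BogoMIPS from loops_per_jiffy as lpj/(500000/HZ) and prints it truncated, not rounded. A quick check, assuming CONFIG_HZ=1000 (inferred from the numbers; the log does not state HZ):

# Reconciling the BogoMIPS figures in the boot log above.
# Assumption: CONFIG_HZ=1000, inferred from the printed values.
HZ = 1000
lpj = 1996249                          # loops_per_jiffy, from "(lpj=1996249)"

def bogomips(loops_per_jiffy):
    # The kernel prints the value truncated, not rounded:
    whole = loops_per_jiffy // (500000 // HZ)
    frac = (loops_per_jiffy // (5000 // HZ)) % 100
    return f"{whole}.{frac:02d}"

print(bogomips(lpj))        # 3992.49 -> matches "3992.49 BogoMIPS"
print(bogomips(2 * lpj))    # 7984.99 -> matches the two-CPU total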
Jan 13 21:52:11.110304 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:52:11.110314 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:52:11.110324 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:52:11.110333 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:52:11.110343 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:52:11.110353 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:52:11.110362 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:52:11.110372 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:52:11.110384 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:52:11.110394 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:52:11.110404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:52:11.110414 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:52:11.110424 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:52:11.110433 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:52:11.110443 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:52:11.110453 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:52:11.110462 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:52:11.110474 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:52:11.110484 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:52:11.110494 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:52:11.110503 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:52:11.110535 systemd-journald[184]: Collecting audit messages is disabled. Jan 13 21:52:11.110562 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:52:11.110572 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:52:11.110588 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:52:11.110598 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:52:11.110607 kernel: Bridge firewalling registered Jan 13 21:52:11.110618 systemd-journald[184]: Journal started Jan 13 21:52:11.110642 systemd-journald[184]: Runtime Journal (/run/log/journal/4cc2a4c0cd98424d89e8572c9ffdd8ea) is 8.0M, max 78.3M, 70.3M free. Jan 13 21:52:11.052206 systemd-modules-load[185]: Inserted module 'overlay' Jan 13 21:52:11.154719 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:52:11.110147 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 13 21:52:11.160625 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:52:11.161375 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:52:11.162187 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:52:11.173125 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:52:11.179124 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
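The device units the initrd is waiting for above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) follow systemd's path-escaping rules: strip the leading slash, turn each remaining "/" into "-", and hex-escape anything that is not a plain unit-name character, so a literal "-" inside a path component becomes \x2d. A simplified sketch of that mapping (cf. systemd-escape --path; corner cases like "/" itself and a leading "." are ignored here):

# Sketch of systemd's unit-name path escaping, simplified.
def escape_path(path: str) -> str:
    parts = path.strip("/").split("/")
    out = []
    for part in parts:
        escaped = "".join(
            c if c.isalnum() or c in ":_." else f"\\x{ord(c):02x}"
            for c in part
        )
        out.append(escaped)
    return "-".join(out)

print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log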
Jan 13 21:52:11.185124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:52:11.190176 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:52:11.202715 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:52:11.212288 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:52:11.216046 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:52:11.217603 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:52:11.219087 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:52:11.223935 dracut-cmdline[212]: dracut-dracut-053 Jan 13 21:52:11.226010 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:52:11.232606 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:52:11.271872 systemd-resolved[228]: Positive Trust Anchors: Jan 13 21:52:11.271905 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:52:11.271951 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:52:11.275110 systemd-resolved[228]: Defaulting to hostname 'linux'. Jan 13 21:52:11.276334 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:52:11.277166 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:52:11.323031 kernel: SCSI subsystem initialized Jan 13 21:52:11.334015 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:52:11.346049 kernel: iscsi: registered transport (tcp) Jan 13 21:52:11.369486 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:52:11.369532 kernel: QLogic iSCSI HBA Driver Jan 13 21:52:11.437386 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:52:11.446238 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:52:11.522143 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
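The positive trust anchor systemd-resolved installs above (". IN DS 20326 8 2 e06d…") is a delegation-signer record for the DNS root zone: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), then the digest of the root key-signing key. Splitting the logged record into its fields (field meanings per RFC 4034; the values come straight from the log line):

# Pulling apart the DS trust-anchor line logged by systemd-resolved.
ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

owner, klass, rtype, key_tag, algorithm, digest_type, digest = ds.split()
assert (klass, rtype) == ("IN", "DS")

print("owner:      ", owner)         # "." = the DNS root zone
print("key tag:    ", key_tag)       # 20326 identifies the root KSK
print("algorithm:  ", algorithm)     # 8 = RSASHA256
print("digest type:", digest_type)   # 2 = SHA-256
assert len(digest) == 64             # 32 bytes of SHA-256, hex-encoded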
Jan 13 21:52:11.522306 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:52:11.526998 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:52:11.600100 kernel: raid6: sse2x4 gen() 5184 MB/s Jan 13 21:52:11.618067 kernel: raid6: sse2x2 gen() 6445 MB/s Jan 13 21:52:11.636469 kernel: raid6: sse2x1 gen() 9651 MB/s Jan 13 21:52:11.636543 kernel: raid6: using algorithm sse2x1 gen() 9651 MB/s Jan 13 21:52:11.655053 kernel: raid6: .... xor() 6216 MB/s, rmw enabled Jan 13 21:52:11.655118 kernel: raid6: using ssse3x2 recovery algorithm Jan 13 21:52:11.681651 kernel: xor: measuring software checksum speed Jan 13 21:52:11.681781 kernel: prefetch64-sse : 17156 MB/sec Jan 13 21:52:11.681811 kernel: generic_sse : 15733 MB/sec Jan 13 21:52:11.683368 kernel: xor: using function: prefetch64-sse (17156 MB/sec) Jan 13 21:52:11.878030 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:52:11.895675 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:52:11.903098 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:52:11.915910 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jan 13 21:52:11.920204 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:52:11.929218 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:52:11.946710 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Jan 13 21:52:11.985922 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:52:11.990163 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:52:12.046508 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:52:12.053229 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:52:12.073805 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:52:12.075850 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:52:12.077596 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:52:12.078162 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:52:12.084694 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:52:12.101725 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:52:12.137979 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 13 21:52:12.165082 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 13 21:52:12.165207 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:52:12.165221 kernel: GPT:17805311 != 20971519 Jan 13 21:52:12.165233 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:52:12.165244 kernel: GPT:17805311 != 20971519 Jan 13 21:52:12.165255 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:52:12.165265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:52:12.141745 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:52:12.141901 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:52:12.142715 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
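The GPT warnings above are the usual signature of a grown disk image: the backup GPT header belongs on the disk's last sector, but it was written when the image was smaller, so the kernel finds it at LBA 17805311 instead of 20971519 (the virtio disk is 20971520 512-byte blocks, i.e. exactly 10 GiB). The arithmetic, for reference:

# Why the kernel prints "GPT:17805311 != 20971519" for /dev/vda.
SECTOR = 512

total_sectors = 20971520                   # "[vda] 20971520 512-byte logical blocks"
expected_backup_lba = total_sectors - 1    # backup GPT header lives on the last sector
found_backup_lba = 17805311                # where it actually is

print(total_sectors * SECTOR / 2**30)            # 10.0 GiB: current disk size
print((found_backup_lba + 1) * SECTOR / 2**30)   # ~8.49 GiB: size when the image was built
print(expected_backup_lba, "!=", found_backup_lba)
# disk-uuid.service later rewrites the headers ("Secondary Header is updated."),
# the same repair the kernel hints at ("Use GNU Parted to correct GPT errors").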
Jan 13 21:52:12.146415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:52:12.146623 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:52:12.147300 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:52:12.166544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:52:12.176977 kernel: libata version 3.00 loaded. Jan 13 21:52:12.187179 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 13 21:52:12.205092 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451) Jan 13 21:52:12.205110 kernel: scsi host0: ata_piix Jan 13 21:52:12.205239 kernel: scsi host1: ata_piix Jan 13 21:52:12.205351 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 13 21:52:12.205372 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 13 21:52:12.207662 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 21:52:12.251058 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (455) Jan 13 21:52:12.252750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:52:12.276475 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 21:52:12.297353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:52:12.302427 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 21:52:12.303112 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 21:52:12.309097 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:52:12.312100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:52:12.324043 disk-uuid[504]: Primary Header is updated. Jan 13 21:52:12.324043 disk-uuid[504]: Secondary Entries is updated. Jan 13 21:52:12.324043 disk-uuid[504]: Secondary Header is updated. Jan 13 21:52:12.334026 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:52:12.337169 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:52:12.340040 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:52:13.355025 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:52:13.357614 disk-uuid[507]: The operation has completed successfully. Jan 13 21:52:13.585423 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:52:13.585677 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:52:13.606273 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:52:13.639217 sh[528]: Success Jan 13 21:52:13.672080 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 13 21:52:13.780258 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:52:13.793189 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:52:13.799905 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
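verity-setup.service, finished above, assembles /dev/mapper/usr from the verity.usr= partition and the verity.usrhash= root hash passed on the kernel command line: /usr is mounted read-only and every block read is checked against a hash tree whose root must equal that hash, so a single 64-hex-digit value pins the whole image. A much-simplified, single-level illustration of the idea (real dm-verity uses a salted, multi-level Merkle tree over 4 KiB blocks built by veritysetup; this only shows why any tampering changes the root):

# Toy illustration of the dm-verity idea behind verity.usrhash=8945029d…
# NOT the real on-disk format.
import hashlib

BLOCK = 4096

def root_hash(image: bytes) -> str:
    # Hash each block, then hash the concatenation of the block hashes.
    hashes = [
        hashlib.sha256(image[i:i + BLOCK]).digest()
        for i in range(0, len(image), BLOCK)
    ]
    return hashlib.sha256(b"".join(hashes)).hexdigest()

image = b"\x00" * (4 * BLOCK)
expected = root_hash(image)

# Flipping a single byte anywhere changes the root hash, so a tampered
# /usr is detectable at read time:
tampered = image[:5] + b"\x01" + image[6:]
assert root_hash(tampered) != expected
print("root hash:", expected[:16], "…")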
Jan 13 21:52:13.843187 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:52:13.843292 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:52:13.847725 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:52:13.852561 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:52:13.856156 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:52:14.071120 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:52:14.073507 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:52:14.085412 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:52:14.091233 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:52:14.110097 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:52:14.110166 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:52:14.114342 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:52:14.127025 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:52:14.151596 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:52:14.157663 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:52:14.177540 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:52:14.186253 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:52:14.257296 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:52:14.263282 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:52:14.285605 systemd-networkd[710]: lo: Link UP Jan 13 21:52:14.285617 systemd-networkd[710]: lo: Gained carrier Jan 13 21:52:14.286823 systemd-networkd[710]: Enumeration completed Jan 13 21:52:14.287286 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:52:14.287400 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:52:14.287404 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:52:14.288853 systemd-networkd[710]: eth0: Link UP Jan 13 21:52:14.288856 systemd-networkd[710]: eth0: Gained carrier Jan 13 21:52:14.288863 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:52:14.290265 systemd[1]: Reached target network.target - Network. Jan 13 21:52:14.297998 systemd-networkd[710]: eth0: DHCPv4 address 172.24.4.15/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 13 21:52:14.786671 ignition[633]: Ignition 2.19.0 Jan 13 21:52:14.787365 ignition[633]: Stage: fetch-offline Jan 13 21:52:14.787498 ignition[633]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:52:14.787525 ignition[633]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:52:14.791163 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 13 21:52:14.787782 ignition[633]: parsed url from cmdline: "" Jan 13 21:52:14.787791 ignition[633]: no config URL provided Jan 13 21:52:14.787805 ignition[633]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:52:14.787826 ignition[633]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:52:14.787838 ignition[633]: failed to fetch config: resource requires networking Jan 13 21:52:14.788340 ignition[633]: Ignition finished successfully Jan 13 21:52:14.804374 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 21:52:14.835479 ignition[721]: Ignition 2.19.0 Jan 13 21:52:14.835517 ignition[721]: Stage: fetch Jan 13 21:52:14.835916 ignition[721]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:52:14.835943 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:52:14.836231 ignition[721]: parsed url from cmdline: "" Jan 13 21:52:14.836240 ignition[721]: no config URL provided Jan 13 21:52:14.836253 ignition[721]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:52:14.836274 ignition[721]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:52:14.836536 ignition[721]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 13 21:52:14.836666 ignition[721]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 13 21:52:14.836728 ignition[721]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 13 21:52:15.029363 ignition[721]: GET result: OK Jan 13 21:52:15.029524 ignition[721]: parsing config with SHA512: cd62a192384c395e080ca39911711e1af5632389bc5d48959cb296c23716a3e5361b426fcab776818de6ba6d70b6493b371ee767ad9dc2b6d95375d95df54acb Jan 13 21:52:15.040185 unknown[721]: fetched base config from "system" Jan 13 21:52:15.040211 unknown[721]: fetched base config from "system" Jan 13 21:52:15.041163 ignition[721]: fetch: fetch complete Jan 13 21:52:15.040228 unknown[721]: fetched user config from "openstack" Jan 13 21:52:15.041175 ignition[721]: fetch: fetch passed Jan 13 21:52:15.044801 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 21:52:15.041267 ignition[721]: Ignition finished successfully Jan 13 21:52:15.056360 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:52:15.089362 ignition[727]: Ignition 2.19.0 Jan 13 21:52:15.089395 ignition[727]: Stage: kargs Jan 13 21:52:15.089799 ignition[727]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:52:15.089827 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:52:15.095156 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:52:15.092267 ignition[727]: kargs: kargs passed Jan 13 21:52:15.092375 ignition[727]: Ignition finished successfully Jan 13 21:52:15.106326 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:52:15.147605 ignition[734]: Ignition 2.19.0 Jan 13 21:52:15.149369 ignition[734]: Stage: disks Jan 13 21:52:15.149797 ignition[734]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:52:15.149824 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:52:15.156617 ignition[734]: disks: disks passed Jan 13 21:52:15.156715 ignition[734]: Ignition finished successfully Jan 13 21:52:15.158836 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:52:15.161234 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
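The fetch stage above races two config sources: it polls for an OpenStack config drive (a filesystem labelled config-2 or CONFIG-2) while also querying the link-local metadata service, and here the HTTP path wins ("GET result: OK"); the downloaded config is then fingerprinted by SHA-512 before parsing. A hedged sketch of that loop (URL and label are from the log; the retry cadence and timeouts are illustrative assumptions, not Ignition's exact behaviour):

# Sketch of the two config sources Ignition's fetch stage tries on OpenStack.
# Retry cadence and timeouts here are illustrative assumptions.
import hashlib, os.path, time, urllib.request

METADATA_URL = "http://169.254.169.254/openstack/latest/user_data"
CONFIG_DRIVE = "/dev/disk/by-label/config-2"   # also probed as CONFIG-2

def fetch_user_data(attempts: int = 5, delay: float = 2.0) -> bytes:
    for attempt in range(1, attempts + 1):
        if os.path.exists(CONFIG_DRIVE):
            raise RuntimeError("config drive present: mount it and read user_data instead")
        try:
            with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
                return resp.read()         # "GET ... attempt #1" in the log
        except OSError:
            time.sleep(delay)
    raise TimeoutError("no config drive and metadata service unreachable")

config = fetch_user_data()
# The log then records "parsing config with SHA512: cd62a1…":
print("SHA512:", hashlib.sha512(config).hexdigest()[:16], "…")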
Jan 13 21:52:15.162732 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:52:15.164886 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:52:15.166823 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:52:15.168646 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:52:15.176242 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:52:15.213253 systemd-fsck[742]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 21:52:15.224995 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:52:15.234228 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:52:15.406011 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 21:52:15.406999 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:52:15.408183 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:52:15.417169 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:52:15.420575 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:52:15.424360 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:52:15.427230 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 13 21:52:15.427847 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:52:15.427873 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:52:15.429485 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:52:15.439145 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:52:15.550184 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (750) Jan 13 21:52:15.583047 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:52:15.588934 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:52:15.589061 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:52:15.679073 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:52:15.751256 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:52:16.239427 systemd-networkd[710]: eth0: Gained IPv6LL Jan 13 21:52:16.266421 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:52:16.281011 initrd-setup-root[785]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:52:16.293693 initrd-setup-root[792]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:52:16.311647 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:52:16.452694 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:52:16.464092 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:52:16.467103 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:52:16.484983 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:52:16.482561 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 13 21:52:16.527830 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:52:16.542945 ignition[867]: INFO : Ignition 2.19.0 Jan 13 21:52:16.542945 ignition[867]: INFO : Stage: mount Jan 13 21:52:16.544264 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:52:16.544264 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:52:16.546532 ignition[867]: INFO : mount: mount passed Jan 13 21:52:16.546532 ignition[867]: INFO : Ignition finished successfully Jan 13 21:52:16.546323 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:52:23.363437 coreos-metadata[752]: Jan 13 21:52:23.363 WARN failed to locate config-drive, using the metadata service API instead Jan 13 21:52:23.403609 coreos-metadata[752]: Jan 13 21:52:23.403 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 21:52:23.419457 coreos-metadata[752]: Jan 13 21:52:23.419 INFO Fetch successful Jan 13 21:52:23.420876 coreos-metadata[752]: Jan 13 21:52:23.420 INFO wrote hostname ci-4081-3-0-2-4850f65211.novalocal to /sysroot/etc/hostname Jan 13 21:52:23.423738 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 13 21:52:23.424063 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 13 21:52:23.437156 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:52:23.464434 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:52:23.482091 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (883) Jan 13 21:52:23.489576 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:52:23.489647 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:52:23.493819 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:52:23.505021 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:52:23.509598 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
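The seven-second jump before the coreos-metadata lines above is the hostname agent timing out on a config drive ("failed to locate config-drive, using the metadata service API instead"), then falling back to the EC2-style endpoint and writing the answer into the new root ("wrote hostname ci-4081-3-0-2-4850f65211.novalocal to /sysroot/etc/hostname"). A minimal sketch of that fallback (endpoint, label, and target path are from the log; everything else is an assumption about the agent's internals):

# Sketch of the coreos-metadata/afterburn hostname step logged above.
import os.path, urllib.request

HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

def write_hostname(sysroot: str = "/sysroot") -> str:
    if os.path.exists("/dev/disk/by-label/config-2"):
        raise NotImplementedError("read openstack/latest/meta_data.json from the drive")
    # Fall back to the metadata service, as the agent does here:
    with urllib.request.urlopen(HOSTNAME_URL, timeout=5) as resp:
        hostname = resp.read().decode().strip()
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname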
Jan 13 21:52:23.556219 ignition[901]: INFO : Ignition 2.19.0 Jan 13 21:52:23.556219 ignition[901]: INFO : Stage: files Jan 13 21:52:23.559605 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:52:23.559605 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:52:23.559605 ignition[901]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:52:23.565028 ignition[901]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:52:23.565028 ignition[901]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:52:23.569055 ignition[901]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:52:23.569055 ignition[901]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:52:23.573086 ignition[901]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:52:23.573086 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:52:23.573086 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:52:23.569215 unknown[901]: wrote ssh authorized keys file for user: core Jan 13 21:52:23.658478 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:52:24.257072 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:52:24.257072 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:52:24.262031 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 21:52:24.797039 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 21:52:25.636553 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:52:25.638192 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:52:25.638192 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:52:25.638192 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:52:25.638192 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:52:25.638192 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:52:25.638192 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:52:25.638192 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:52:25.653037 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:52:25.653037 
ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:52:25.653037 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:52:25.653037 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:52:25.653037 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:52:25.653037 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:52:25.653037 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 21:52:26.236252 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 21:52:29.079012 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:52:29.079012 ignition[901]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 21:52:29.084488 ignition[901]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:52:29.084488 ignition[901]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:52:29.084488 ignition[901]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 21:52:29.084488 ignition[901]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:52:29.084488 ignition[901]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:52:29.084488 ignition[901]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:52:29.084488 ignition[901]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:52:29.084488 ignition[901]: INFO : files: files passed Jan 13 21:52:29.084488 ignition[901]: INFO : Ignition finished successfully Jan 13 21:52:29.083326 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:52:29.093173 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:52:29.100114 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:52:29.101140 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:52:29.101221 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
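The Ignition files stage above is driven by a declarative JSON config rather than imperative steps. The sketch below builds a config fragment in the shape of the Ignition v3 schema that would produce op(3) (fetching the Helm tarball) and the prepare-helm.service preset from the log; the spec version and the unit body are assumptions for illustration, since this machine's actual config is not shown.

    # Hedged sketch of an Ignition v3-style config that would yield the
    # operations logged above. Spec version and unit contents are assumed.
    import json

    config = {
        "ignition": {"version": "3.3.0"},  # assumed spec version
        "storage": {
            "files": [
                {   # mirrors op(3): fetch the Helm tarball into /opt
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "contents": {
                        "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
                    },
                },
            ]
        },
        "systemd": {
            "units": [
                {   # mirrors op(c)/op(e): install and enable prepare-helm.service
                    "name": "prepare-helm.service",
                    "enabled": True,
                    "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",  # placeholder body
                },
            ]
        },
    }

    print(json.dumps(config, indent=2))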
Jan 13 21:52:29.127759 initrd-setup-root-after-ignition[929]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:52:29.127759 initrd-setup-root-after-ignition[929]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:52:29.132238 initrd-setup-root-after-ignition[933]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:52:29.130213 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:52:29.133245 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:52:29.142149 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:52:29.181730 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:52:29.181946 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:52:29.184307 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:52:29.186388 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:52:29.188376 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:52:29.195246 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:52:29.211641 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:52:29.220262 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:52:29.230200 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:52:29.230881 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:52:29.231626 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:52:29.233853 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:52:29.233991 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:52:29.236798 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:52:29.237910 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:52:29.239810 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:52:29.242088 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:52:29.243985 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:52:29.245807 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:52:29.248002 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:52:29.250233 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:52:29.252402 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:52:29.254528 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:52:29.256684 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:52:29.256798 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:52:29.259620 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:52:29.260787 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:52:29.262453 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:52:29.263045 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 13 21:52:29.264448 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:52:29.264562 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:52:29.267756 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:52:29.267882 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:52:29.268947 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:52:29.269083 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:52:29.281421 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:52:29.281993 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:52:29.282164 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:52:29.285188 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:52:29.285699 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:52:29.285867 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:52:29.286644 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:52:29.286843 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:52:29.294404 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:52:29.294491 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:52:29.303985 ignition[953]: INFO : Ignition 2.19.0 Jan 13 21:52:29.303985 ignition[953]: INFO : Stage: umount Jan 13 21:52:29.307005 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:52:29.307005 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 21:52:29.307005 ignition[953]: INFO : umount: umount passed Jan 13 21:52:29.307005 ignition[953]: INFO : Ignition finished successfully Jan 13 21:52:29.308727 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:52:29.308836 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:52:29.311480 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:52:29.311523 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:52:29.313229 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:52:29.313273 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:52:29.314320 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:52:29.314375 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:52:29.316187 systemd[1]: Stopped target network.target - Network. Jan 13 21:52:29.316631 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:52:29.316693 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:52:29.317238 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:52:29.317675 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:52:29.320157 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:52:29.321031 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:52:29.322018 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:52:29.323068 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:52:29.323105 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 13 21:52:29.324224 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:52:29.324258 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:52:29.325379 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:52:29.325422 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:52:29.326396 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:52:29.326437 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:52:29.327685 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:52:29.329326 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:52:29.332045 systemd-networkd[710]: eth0: DHCPv6 lease lost Jan 13 21:52:29.333483 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:52:29.334232 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:52:29.334328 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:52:29.336549 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:52:29.336838 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:52:29.338030 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:52:29.338136 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:52:29.339781 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:52:29.340127 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:52:29.341314 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:52:29.341359 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:52:29.349104 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:52:29.352633 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:52:29.352712 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:52:29.354004 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:52:29.354278 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:52:29.356264 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:52:29.356398 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:52:29.358066 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:52:29.358167 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:52:29.360326 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:52:29.368669 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:52:29.368819 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:52:29.369891 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:52:29.369944 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:52:29.373233 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:52:29.373916 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:52:29.375178 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:52:29.375741 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 13 21:52:29.377034 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:52:29.377074 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:52:29.378124 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:52:29.378166 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:52:29.381105 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:52:29.381673 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:52:29.381727 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:52:29.384904 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:52:29.384983 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:52:29.386429 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:52:29.387210 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:52:29.394679 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:52:29.394783 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:52:29.396309 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:52:29.405411 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:52:29.413059 systemd[1]: Switching root. Jan 13 21:52:29.442109 systemd-journald[184]: Journal stopped Jan 13 21:52:31.309651 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 13 21:52:31.309725 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:52:31.309741 kernel: SELinux: policy capability open_perms=1 Jan 13 21:52:31.309752 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:52:31.309763 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:52:31.309777 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:52:31.309789 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:52:31.309804 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:52:31.309815 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:52:31.309832 systemd[1]: Successfully loaded SELinux policy in 72.242ms. Jan 13 21:52:31.309857 kernel: audit: type=1403 audit(1736805150.209:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:52:31.309870 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.840ms. Jan 13 21:52:31.309883 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:52:31.309896 systemd[1]: Detected virtualization kvm. Jan 13 21:52:31.309910 systemd[1]: Detected architecture x86-64. Jan 13 21:52:31.309922 systemd[1]: Detected first boot. Jan 13 21:52:31.309934 systemd[1]: Hostname set to . Jan 13 21:52:31.309949 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:52:31.309976 zram_generator::config[995]: No configuration found. Jan 13 21:52:31.309995 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:52:31.310007 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Jan 13 21:52:31.310022 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:52:31.310034 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:52:31.310047 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:52:31.310059 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:52:31.310071 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:52:31.310083 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:52:31.310096 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:52:31.310109 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:52:31.310121 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:52:31.310136 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:52:31.310150 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:52:31.310162 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:52:31.310175 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:52:31.310186 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:52:31.310198 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:52:31.310211 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:52:31.310223 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:52:31.310235 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:52:31.310249 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:52:31.310262 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:52:31.310274 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:52:31.310286 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:52:31.310298 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:52:31.310316 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:52:31.310331 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:52:31.310343 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:52:31.310355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:52:31.310367 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:52:31.310380 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:52:31.310392 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:52:31.310404 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:52:31.310417 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:52:31.310429 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:52:31.310443 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jan 13 21:52:31.310455 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:52:31.310467 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:52:31.310479 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:52:31.310491 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:52:31.310503 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:52:31.310516 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:52:31.310528 systemd[1]: Reached target machines.target - Containers. Jan 13 21:52:31.310540 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:52:31.310554 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:52:31.310567 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:52:31.310579 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:52:31.310590 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:52:31.310602 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:52:31.310614 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:52:31.310626 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:52:31.310638 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:52:31.310652 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:52:31.310664 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:52:31.310677 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:52:31.310689 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:52:31.310701 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:52:31.310713 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:52:31.310725 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:52:31.310736 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:52:31.310748 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:52:31.310763 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:52:31.310775 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:52:31.310797 systemd[1]: Stopped verity-setup.service. Jan 13 21:52:31.310811 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:52:31.310824 kernel: ACPI: bus type drm_connector registered Jan 13 21:52:31.310835 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:52:31.310847 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:52:31.310859 kernel: loop: module loaded Jan 13 21:52:31.310873 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 13 21:52:31.310885 kernel: fuse: init (API version 7.39) Jan 13 21:52:31.310897 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:52:31.310911 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:52:31.310923 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:52:31.310939 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:52:31.310994 systemd-journald[1091]: Collecting audit messages is disabled. Jan 13 21:52:31.311027 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:52:31.311041 systemd-journald[1091]: Journal started Jan 13 21:52:31.311067 systemd-journald[1091]: Runtime Journal (/run/log/journal/4cc2a4c0cd98424d89e8572c9ffdd8ea) is 8.0M, max 78.3M, 70.3M free. Jan 13 21:52:30.933439 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:52:30.959793 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:52:30.960223 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:52:31.315231 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:52:31.316333 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:52:31.316599 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:52:31.317421 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:52:31.317656 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:52:31.318531 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:52:31.318776 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:52:31.319633 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:52:31.319880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:52:31.320740 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:52:31.321100 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:52:31.321871 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:52:31.322291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:52:31.323125 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:52:31.323944 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:52:31.324793 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:52:31.336148 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:52:31.344084 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:52:31.350028 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:52:31.351051 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:52:31.351221 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:52:31.353669 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:52:31.358094 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:52:31.366153 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 13 21:52:31.367269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:52:31.372111 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:52:31.374094 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:52:31.374665 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:52:31.378057 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:52:31.379057 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:52:31.381095 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:52:31.387094 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:52:31.391822 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:52:31.394727 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:52:31.396439 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:52:31.397375 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:52:31.411083 systemd-journald[1091]: Time spent on flushing to /var/log/journal/4cc2a4c0cd98424d89e8572c9ffdd8ea is 50.451ms for 948 entries. Jan 13 21:52:31.411083 systemd-journald[1091]: System Journal (/var/log/journal/4cc2a4c0cd98424d89e8572c9ffdd8ea) is 8.0M, max 584.8M, 576.8M free. Jan 13 21:52:31.501145 systemd-journald[1091]: Received client request to flush runtime journal. Jan 13 21:52:31.501189 kernel: loop0: detected capacity change from 0 to 140768 Jan 13 21:52:31.417380 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:52:31.418116 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:52:31.422113 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:52:31.442893 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:52:31.462310 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:52:31.475654 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:52:31.476506 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:52:31.481287 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:52:31.486135 udevadm[1138]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:52:31.505318 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:52:31.742778 systemd-tmpfiles[1140]: ACLs are not supported, ignoring. Jan 13 21:52:31.742852 systemd-tmpfiles[1140]: ACLs are not supported, ignoring. Jan 13 21:52:31.752093 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:52:31.874477 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:52:31.880392 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 13 21:52:31.940236 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:52:31.967080 kernel: loop1: detected capacity change from 0 to 210664 Jan 13 21:52:32.030457 kernel: loop2: detected capacity change from 0 to 8 Jan 13 21:52:32.052995 kernel: loop3: detected capacity change from 0 to 142488 Jan 13 21:52:32.164013 kernel: loop4: detected capacity change from 0 to 140768 Jan 13 21:52:32.240417 kernel: loop5: detected capacity change from 0 to 210664 Jan 13 21:52:32.305562 kernel: loop6: detected capacity change from 0 to 8 Jan 13 21:52:32.313058 kernel: loop7: detected capacity change from 0 to 142488 Jan 13 21:52:32.376360 (sd-merge)[1155]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 13 21:52:32.377192 (sd-merge)[1155]: Merged extensions into '/usr'. Jan 13 21:52:32.389710 systemd[1]: Reloading requested from client PID 1128 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:52:32.390156 systemd[1]: Reloading... Jan 13 21:52:32.493408 zram_generator::config[1177]: No configuration found. Jan 13 21:52:32.685129 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:52:32.703464 ldconfig[1123]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:52:32.743042 systemd[1]: Reloading finished in 352 ms. Jan 13 21:52:32.772616 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:52:32.773567 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:52:32.774457 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:52:32.784115 systemd[1]: Starting ensure-sysext.service... Jan 13 21:52:32.786134 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:52:32.790267 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:52:32.814065 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:52:32.814082 systemd[1]: Reloading... Jan 13 21:52:32.831168 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:52:32.831522 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:52:32.834933 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:52:32.837309 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jan 13 21:52:32.837380 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jan 13 21:52:32.844947 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:52:32.846004 systemd-tmpfiles[1239]: Skipping /boot Jan 13 21:52:32.862155 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:52:32.862172 systemd-tmpfiles[1239]: Skipping /boot Jan 13 21:52:32.864817 systemd-udevd[1240]: Using default interface naming scheme 'v255'. Jan 13 21:52:32.899997 zram_generator::config[1267]: No configuration found. 
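For context on the (sd-merge) lines above: systemd-sysext discovers extension images, or symlinks to them such as the kubernetes.raw link Ignition wrote earlier, in a few well-known directories and overlays their /usr trees onto the running system. A simplified Python sketch of just the discovery step; the directory list is partial and the overlay mount itself is out of scope.

    # Simplified sketch of systemd-sysext's discovery step, not its real code.
    # Search directories are a partial list; overlay mounting is omitted.
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions() -> list[str]:
        images: list[str] = []
        for d in SEARCH_DIRS:
            p = Path(d)
            if p.is_dir():
                # matches regular files and symlinks such as kubernetes.raw
                images += sorted(str(f) for f in p.glob("*.raw"))
        return images

    if __name__ == "__main__":
        for img in discover_extensions():
            print("would merge:", img)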
Jan 13 21:52:33.028999 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 21:52:33.036388 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 13 21:52:33.041346 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:52:33.067022 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1278) Jan 13 21:52:33.108984 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 21:52:33.147983 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:52:33.177990 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 13 21:52:33.175929 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:52:33.179994 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 13 21:52:33.185290 kernel: Console: switching to colour dummy device 80x25 Jan 13 21:52:33.185390 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 13 21:52:33.185413 kernel: [drm] features: -context_init Jan 13 21:52:33.188087 kernel: [drm] number of scanouts: 1 Jan 13 21:52:33.188132 kernel: [drm] number of cap sets: 0 Jan 13 21:52:33.192000 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 13 21:52:33.203573 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 13 21:52:33.203685 kernel: Console: switching to colour frame buffer device 160x50 Jan 13 21:52:33.211996 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 13 21:52:33.250260 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:52:33.250544 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:52:33.253720 systemd[1]: Reloading finished in 439 ms. Jan 13 21:52:33.269214 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:52:33.276338 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:52:33.309928 systemd[1]: Finished ensure-sysext.service. Jan 13 21:52:33.314135 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:52:33.321190 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:52:33.326101 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:52:33.326320 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:52:33.328422 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:52:33.331043 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:52:33.333295 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:52:33.339158 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:52:33.341121 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:52:33.344036 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 13 21:52:33.345225 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:52:33.350087 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:52:33.354250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:52:33.359141 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:52:33.360383 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:52:33.362044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:52:33.363075 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:52:33.363707 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:52:33.367141 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:52:33.390688 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:52:33.408222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:52:33.408389 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:52:33.408611 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:52:33.410531 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:52:33.410907 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:52:33.417101 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:52:33.418039 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:52:33.427293 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:52:33.437508 lvm[1375]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:52:33.438191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:52:33.438365 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:52:33.442603 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:52:33.459686 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:52:33.476804 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:52:33.480555 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:52:33.491533 augenrules[1398]: No rules Jan 13 21:52:33.492054 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:52:33.494637 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:52:33.500690 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:52:33.518378 lvm[1397]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:52:33.520024 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:52:33.528333 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:52:33.544606 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 13 21:52:33.547794 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:52:33.582365 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:52:33.590864 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:52:33.606026 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:52:33.628392 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:52:33.633108 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:52:33.640844 systemd-networkd[1366]: lo: Link UP Jan 13 21:52:33.640854 systemd-networkd[1366]: lo: Gained carrier Jan 13 21:52:33.642127 systemd-networkd[1366]: Enumeration completed Jan 13 21:52:33.642205 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:52:33.646071 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:52:33.646082 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:52:33.646817 systemd-networkd[1366]: eth0: Link UP Jan 13 21:52:33.646827 systemd-networkd[1366]: eth0: Gained carrier Jan 13 21:52:33.646841 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:52:33.652219 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:52:33.663353 systemd-resolved[1367]: Positive Trust Anchors: Jan 13 21:52:33.663688 systemd-resolved[1367]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:52:33.663741 systemd-resolved[1367]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:52:33.665039 systemd-networkd[1366]: eth0: DHCPv4 address 172.24.4.15/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 13 21:52:33.666419 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Jan 13 21:52:33.670678 systemd-resolved[1367]: Using system hostname 'ci-4081-3-0-2-4850f65211.novalocal'. Jan 13 21:52:33.672294 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:52:33.673539 systemd[1]: Reached target network.target - Network. Jan 13 21:52:33.675211 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:52:33.677671 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:52:33.679843 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:52:33.681988 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
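A note on the systemd-resolved lines above: the positive trust anchor is the published root-zone DNSSEC key, listed as a DS record with key tag 20326, algorithm 8 (RSA/SHA-256), and digest type 2 (SHA-256); resolved ships it built in so it can validate signed responses, while the negative anchors exempt private-use zones such as 10.in-addr.arpa, home.arpa, and internal from validation.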
Jan 13 21:52:33.684288 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:52:33.686447 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:52:33.688507 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:52:33.690637 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:52:33.690666 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:52:33.692753 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:52:33.695320 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:52:33.698872 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:52:33.710697 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:52:33.717985 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:52:33.718811 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:52:33.721366 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:52:33.722042 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:52:33.722079 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:52:33.737087 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:52:33.741307 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:52:33.750147 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:52:33.765122 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:52:33.769783 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:52:33.771587 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:52:33.778429 jq[1429]: false Jan 13 21:52:33.780276 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:52:33.789291 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:52:33.798173 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:52:33.803406 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:52:33.815223 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:52:33.818602 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:52:33.821368 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:52:33.822742 dbus-daemon[1428]: [system] SELinux support is enabled Jan 13 21:52:33.823123 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:52:33.828320 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:52:33.831837 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 13 21:52:33.845137 extend-filesystems[1430]: Found loop4 Jan 13 21:52:33.845137 extend-filesystems[1430]: Found loop5 Jan 13 21:52:33.845137 extend-filesystems[1430]: Found loop6 Jan 13 21:52:33.845137 extend-filesystems[1430]: Found loop7 Jan 13 21:52:33.845137 extend-filesystems[1430]: Found vda Jan 13 21:52:33.845137 extend-filesystems[1430]: Found vda1 Jan 13 21:52:33.845137 extend-filesystems[1430]: Found vda2 Jan 13 21:52:33.845137 extend-filesystems[1430]: Found vda3 Jan 13 21:52:33.845137 extend-filesystems[1430]: Found usr Jan 13 21:52:33.845137 extend-filesystems[1430]: Found vda4 Jan 13 21:52:33.845137 extend-filesystems[1430]: Found vda6 Jan 13 21:52:33.845137 extend-filesystems[1430]: Found vda7 Jan 13 21:52:33.845137 extend-filesystems[1430]: Found vda9 Jan 13 21:52:33.845137 extend-filesystems[1430]: Checking size of /dev/vda9 Jan 13 21:52:33.850459 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:52:33.890007 jq[1444]: true Jan 13 21:52:33.850629 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:52:33.890320 update_engine[1443]: I20250113 21:52:33.876003 1443 main.cc:92] Flatcar Update Engine starting Jan 13 21:52:33.850934 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:52:33.851124 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:52:33.868346 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:52:33.868519 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:52:33.901012 update_engine[1443]: I20250113 21:52:33.897688 1443 update_check_scheduler.cc:74] Next update check in 2m2s Jan 13 21:52:33.903368 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:52:33.903406 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:52:33.905806 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:52:33.905824 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:52:33.906499 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:52:33.909793 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:52:33.915658 jq[1452]: true Jan 13 21:52:33.928357 extend-filesystems[1430]: Resized partition /dev/vda9 Jan 13 21:52:33.919138 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:52:33.932800 extend-filesystems[1464]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:52:33.948904 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 13 21:52:33.951433 tar[1448]: linux-amd64/helm Jan 13 21:52:33.958949 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 13 21:52:34.015116 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1286) Jan 13 21:52:34.024344 systemd-logind[1442]: New seat seat0. 
Jan 13 21:52:34.032694 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:52:34.037598 extend-filesystems[1464]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:52:34.037598 extend-filesystems[1464]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:52:34.037598 extend-filesystems[1464]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 13 21:52:34.032721 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:52:34.041912 extend-filesystems[1430]: Resized filesystem in /dev/vda9 Jan 13 21:52:34.032970 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:52:34.044671 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:52:34.044862 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:52:34.074989 bash[1482]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:52:34.081711 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:52:34.102768 systemd[1]: Starting sshkeys.service... Jan 13 21:52:34.145234 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:52:34.156441 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:52:34.245359 locksmithd[1463]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:52:34.404737 containerd[1453]: time="2025-01-13T21:52:34.404599721Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:52:34.471822 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:52:34.477979 containerd[1453]: time="2025-01-13T21:52:34.477896629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:52:34.481866 containerd[1453]: time="2025-01-13T21:52:34.481831459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:52:34.483058 containerd[1453]: time="2025-01-13T21:52:34.482684669Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:52:34.483058 containerd[1453]: time="2025-01-13T21:52:34.482722260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:52:34.483058 containerd[1453]: time="2025-01-13T21:52:34.482910793Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:52:34.483058 containerd[1453]: time="2025-01-13T21:52:34.482931933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:52:34.483258 containerd[1453]: time="2025-01-13T21:52:34.483235763Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:52:34.483322 containerd[1453]: time="2025-01-13T21:52:34.483307117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
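For scale, the resize logged above: with the 4 KiB block size ext4 reports, /dev/vda9 grows from 1617920 × 4096 B ≈ 6.63 GB to 2014203 × 4096 B ≈ 8.25 GB (about 7.7 GiB), i.e. the root filesystem is expanded on-line to fill its partition.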
type=io.containerd.snapshotter.v1 Jan 13 21:52:34.483592 containerd[1453]: time="2025-01-13T21:52:34.483564349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:52:34.484007 containerd[1453]: time="2025-01-13T21:52:34.483987543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:52:34.484084 containerd[1453]: time="2025-01-13T21:52:34.484066311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:52:34.484216 containerd[1453]: time="2025-01-13T21:52:34.484201053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:52:34.484367 containerd[1453]: time="2025-01-13T21:52:34.484347759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:52:34.484931 containerd[1453]: time="2025-01-13T21:52:34.484912448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:52:34.485366 containerd[1453]: time="2025-01-13T21:52:34.485345680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:52:34.485891 containerd[1453]: time="2025-01-13T21:52:34.485634031Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:52:34.485891 containerd[1453]: time="2025-01-13T21:52:34.485725953Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:52:34.485891 containerd[1453]: time="2025-01-13T21:52:34.485778241Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:52:34.493614 containerd[1453]: time="2025-01-13T21:52:34.493159876Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:52:34.493614 containerd[1453]: time="2025-01-13T21:52:34.493210981Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:52:34.493614 containerd[1453]: time="2025-01-13T21:52:34.493231770Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:52:34.493614 containerd[1453]: time="2025-01-13T21:52:34.493251377Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:52:34.493614 containerd[1453]: time="2025-01-13T21:52:34.493267518Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:52:34.493614 containerd[1453]: time="2025-01-13T21:52:34.493395858Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:52:34.495155 containerd[1453]: time="2025-01-13T21:52:34.495036375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495272027Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495313304Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495342058Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495371423Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495403814Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495426366Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495457355Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495485497Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495527075Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495555589Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495578993Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495626061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495654875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.496176 containerd[1453]: time="2025-01-13T21:52:34.495684541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.496488 containerd[1453]: time="2025-01-13T21:52:34.495710079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.496488 containerd[1453]: time="2025-01-13T21:52:34.495731980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.496488 containerd[1453]: time="2025-01-13T21:52:34.495754662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.496488 containerd[1453]: time="2025-01-13T21:52:34.495781803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.496488 containerd[1453]: time="2025-01-13T21:52:34.495809996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 13 21:52:34.496488 containerd[1453]: time="2025-01-13T21:52:34.495829352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.496488 containerd[1453]: time="2025-01-13T21:52:34.495868055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.496488 containerd[1453]: time="2025-01-13T21:52:34.495892942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.496488 containerd[1453]: time="2025-01-13T21:52:34.495914372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.496488 containerd[1453]: time="2025-01-13T21:52:34.495934680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.499621 containerd[1453]: time="2025-01-13T21:52:34.499588082Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:52:34.499668 containerd[1453]: time="2025-01-13T21:52:34.499638055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.499692 containerd[1453]: time="2025-01-13T21:52:34.499668713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.499719 containerd[1453]: time="2025-01-13T21:52:34.499695253Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:52:34.499999 containerd[1453]: time="2025-01-13T21:52:34.499756638Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:52:34.499999 containerd[1453]: time="2025-01-13T21:52:34.499783238Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:52:34.499999 containerd[1453]: time="2025-01-13T21:52:34.499800941Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:52:34.499999 containerd[1453]: time="2025-01-13T21:52:34.499824395Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:52:34.499999 containerd[1453]: time="2025-01-13T21:52:34.499847198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:52:34.499999 containerd[1453]: time="2025-01-13T21:52:34.499867125Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:52:34.499999 containerd[1453]: time="2025-01-13T21:52:34.499880170Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:52:34.499999 containerd[1453]: time="2025-01-13T21:52:34.499897161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:52:34.501507 containerd[1453]: time="2025-01-13T21:52:34.501016501Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:52:34.501507 containerd[1453]: time="2025-01-13T21:52:34.501114775Z" level=info msg="Connect containerd service" Jan 13 21:52:34.501507 containerd[1453]: time="2025-01-13T21:52:34.501170950Z" level=info msg="using legacy CRI server" Jan 13 21:52:34.501507 containerd[1453]: time="2025-01-13T21:52:34.501181270Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:52:34.501507 containerd[1453]: time="2025-01-13T21:52:34.501305743Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:52:34.502077 containerd[1453]: time="2025-01-13T21:52:34.501874871Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:52:34.502607 
containerd[1453]: time="2025-01-13T21:52:34.502179031Z" level=info msg="Start subscribing containerd event" Jan 13 21:52:34.502607 containerd[1453]: time="2025-01-13T21:52:34.502228754Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:52:34.502607 containerd[1453]: time="2025-01-13T21:52:34.502251196Z" level=info msg="Start recovering state" Jan 13 21:52:34.502607 containerd[1453]: time="2025-01-13T21:52:34.502275342Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:52:34.502607 containerd[1453]: time="2025-01-13T21:52:34.502331176Z" level=info msg="Start event monitor" Jan 13 21:52:34.502607 containerd[1453]: time="2025-01-13T21:52:34.502351114Z" level=info msg="Start snapshots syncer" Jan 13 21:52:34.502607 containerd[1453]: time="2025-01-13T21:52:34.502364850Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:52:34.502607 containerd[1453]: time="2025-01-13T21:52:34.502374448Z" level=info msg="Start streaming server" Jan 13 21:52:34.502607 containerd[1453]: time="2025-01-13T21:52:34.502442666Z" level=info msg="containerd successfully booted in 0.100715s" Jan 13 21:52:34.503011 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:52:34.508599 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:52:34.519351 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:52:34.531473 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:52:34.531712 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:52:34.541802 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:52:34.560479 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:52:34.570922 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:52:34.574377 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:52:34.577945 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:52:34.699296 tar[1448]: linux-amd64/LICENSE Jan 13 21:52:34.699392 tar[1448]: linux-amd64/README.md Jan 13 21:52:34.714106 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:52:34.735165 systemd-networkd[1366]: eth0: Gained IPv6LL Jan 13 21:52:34.735712 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Jan 13 21:52:34.740130 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:52:34.745353 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:52:34.758551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:52:34.774683 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:52:34.825561 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:52:36.655265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:52:36.668838 (kubelet)[1540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:52:36.747603 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:52:36.761416 systemd[1]: Started sshd@0-172.24.4.15:22-172.24.4.1:46758.service - OpenSSH per-connection server daemon (172.24.4.1:46758). 
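At this point containerd is fully up, with overlayfs as the only usable snapshotter (the btrfs and zfs snapshotters were skipped above because /var/lib/containerd sits on ext4 rather than a matching filesystem, and tracing was skipped for lack of an OTLP endpoint). The one error, "no network config found in /etc/cni/net.d", is expected on a node that has not yet had a CNI add-on installed. A minimal sketch of how to inspect and clear it, assuming ctr is on PATH; the bridge network below is illustrative, not this host's real configuration:

ctr plugins ls | grep snapshotter    # confirm which snapshotters actually loaded
cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
{
  "cniVersion": "0.4.0",
  "name": "bridge-net",
  "plugins": [
    { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF

In a kubeadm cluster this file normally arrives from the CNI add-on (flannel, calico, and so on) rather than by hand, which is why the error is left standing here.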
Jan 13 21:52:38.032918 sshd[1542]: Accepted publickey for core from 172.24.4.1 port 46758 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:38.038152 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:38.057877 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:52:38.068336 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:52:38.076725 systemd-logind[1442]: New session 1 of user core. Jan 13 21:52:38.087706 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:52:38.100096 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:52:38.107102 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:52:38.176439 kubelet[1540]: E0113 21:52:38.176395 1540 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:52:38.179256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:52:38.179395 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:52:38.180140 systemd[1]: kubelet.service: Consumed 2.188s CPU time. Jan 13 21:52:38.230498 systemd[1554]: Queued start job for default target default.target. Jan 13 21:52:38.244910 systemd[1554]: Created slice app.slice - User Application Slice. Jan 13 21:52:38.244939 systemd[1554]: Reached target paths.target - Paths. Jan 13 21:52:38.244954 systemd[1554]: Reached target timers.target - Timers. Jan 13 21:52:38.246340 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:52:38.269941 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:52:38.271663 systemd[1554]: Reached target sockets.target - Sockets. Jan 13 21:52:38.271700 systemd[1554]: Reached target basic.target - Basic System. Jan 13 21:52:38.271780 systemd[1554]: Reached target default.target - Main User Target. Jan 13 21:52:38.271828 systemd[1554]: Startup finished in 158ms. Jan 13 21:52:38.271918 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:52:38.280278 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:52:38.846691 systemd[1]: Started sshd@1-172.24.4.15:22-172.24.4.1:46760.service - OpenSSH per-connection server daemon (172.24.4.1:46760). Jan 13 21:52:39.621080 login[1517]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 21:52:39.636400 systemd-logind[1442]: New session 2 of user core. Jan 13 21:52:39.643842 login[1518]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 21:52:39.646173 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:52:39.665098 systemd-logind[1442]: New session 3 of user core. Jan 13 21:52:39.670544 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:52:40.732000 sshd[1566]: Accepted publickey for core from 172.24.4.1 port 46760 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:40.735071 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:40.744444 systemd-logind[1442]: New session 4 of user core. 
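The kubelet exit above is the expected pre-bootstrap failure mode: /var/lib/kubelet/config.yaml does not exist until kubeadm writes it during init or join, so the unit fails and systemd reschedules it (hence the restart counters that follow). For orientation only, a minimal sketch of the kind of file kubeadm eventually generates; the real one is derived from the cluster's kubeadm configuration:

cat <<'EOF' >/var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # matches SystemdCgroup:true in the containerd CRI config logged earlier
EOF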
Jan 13 21:52:40.758523 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:52:40.831377 coreos-metadata[1425]: Jan 13 21:52:40.831 WARN failed to locate config-drive, using the metadata service API instead Jan 13 21:52:40.880295 coreos-metadata[1425]: Jan 13 21:52:40.880 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 13 21:52:41.177413 coreos-metadata[1425]: Jan 13 21:52:41.177 INFO Fetch successful Jan 13 21:52:41.177413 coreos-metadata[1425]: Jan 13 21:52:41.177 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 21:52:41.190541 coreos-metadata[1425]: Jan 13 21:52:41.190 INFO Fetch successful Jan 13 21:52:41.190541 coreos-metadata[1425]: Jan 13 21:52:41.190 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 13 21:52:41.203528 coreos-metadata[1425]: Jan 13 21:52:41.203 INFO Fetch successful Jan 13 21:52:41.203528 coreos-metadata[1425]: Jan 13 21:52:41.203 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 13 21:52:41.215032 coreos-metadata[1425]: Jan 13 21:52:41.214 INFO Fetch successful Jan 13 21:52:41.215032 coreos-metadata[1425]: Jan 13 21:52:41.215 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 13 21:52:41.227638 coreos-metadata[1425]: Jan 13 21:52:41.227 INFO Fetch successful Jan 13 21:52:41.227638 coreos-metadata[1425]: Jan 13 21:52:41.227 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 13 21:52:41.238313 coreos-metadata[1425]: Jan 13 21:52:41.238 INFO Fetch successful Jan 13 21:52:41.258225 coreos-metadata[1490]: Jan 13 21:52:41.258 WARN failed to locate config-drive, using the metadata service API instead Jan 13 21:52:41.286662 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:52:41.287821 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:52:41.301607 coreos-metadata[1490]: Jan 13 21:52:41.301 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 13 21:52:41.314143 coreos-metadata[1490]: Jan 13 21:52:41.314 INFO Fetch successful Jan 13 21:52:41.314143 coreos-metadata[1490]: Jan 13 21:52:41.314 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 21:52:41.324910 coreos-metadata[1490]: Jan 13 21:52:41.324 INFO Fetch successful Jan 13 21:52:41.329297 unknown[1490]: wrote ssh authorized keys file for user: core Jan 13 21:52:41.333370 sshd[1566]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:41.358279 systemd[1]: sshd@1-172.24.4.15:22-172.24.4.1:46760.service: Deactivated successfully. Jan 13 21:52:41.363866 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:52:41.367706 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:52:41.378322 update-ssh-keys[1604]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:52:41.379634 systemd[1]: Started sshd@2-172.24.4.15:22-172.24.4.1:46764.service - OpenSSH per-connection server daemon (172.24.4.1:46764). Jan 13 21:52:41.382833 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:52:41.388245 systemd[1]: Finished sshkeys.service. Jan 13 21:52:41.396209 systemd[1]: Reached target multi-user.target - Multi-User System. 
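Both coreos-metadata instances fail to locate a config drive and fall back to the OpenStack metadata API at 169.254.169.254; one gathers instance attributes for coreos-metadata.service, while the sshkeys instance fetches the public key that ends up in /home/core/.ssh/authorized_keys. The same endpoints can be probed by hand from the instance, using exactly the URLs logged:

curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json
curl -s http://169.254.169.254/latest/meta-data/hostname
curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key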
Jan 13 21:52:41.396652 systemd[1]: Startup finished in 1.177s (kernel) + 19.417s (initrd) + 11.258s (userspace) = 31.853s. Jan 13 21:52:41.397072 systemd-logind[1442]: Removed session 4. Jan 13 21:52:42.641141 sshd[1608]: Accepted publickey for core from 172.24.4.1 port 46764 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:42.643910 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:42.655403 systemd-logind[1442]: New session 5 of user core. Jan 13 21:52:42.664243 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:52:43.392100 sshd[1608]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:43.399372 systemd[1]: sshd@2-172.24.4.15:22-172.24.4.1:46764.service: Deactivated successfully. Jan 13 21:52:43.402553 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:52:43.404058 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:52:43.406088 systemd-logind[1442]: Removed session 5. Jan 13 21:52:48.268529 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:52:48.276378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:52:48.685341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:52:48.699533 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:52:48.880548 kubelet[1624]: E0113 21:52:48.880430 1624 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:52:48.887682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:52:48.888034 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:52:53.412475 systemd[1]: Started sshd@3-172.24.4.15:22-172.24.4.1:43944.service - OpenSSH per-connection server daemon (172.24.4.1:43944). Jan 13 21:52:54.946587 sshd[1633]: Accepted publickey for core from 172.24.4.1 port 43944 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:54.949321 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:54.960359 systemd-logind[1442]: New session 6 of user core. Jan 13 21:52:54.964297 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:52:55.688596 sshd[1633]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:55.699508 systemd[1]: sshd@3-172.24.4.15:22-172.24.4.1:43944.service: Deactivated successfully. Jan 13 21:52:55.702578 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:52:55.704593 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:52:55.712574 systemd[1]: Started sshd@4-172.24.4.15:22-172.24.4.1:43952.service - OpenSSH per-connection server daemon (172.24.4.1:43952). Jan 13 21:52:55.715203 systemd-logind[1442]: Removed session 6. 
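The "Startup finished" line splits boot time across kernel, initrd, and userspace (initrd dominates here at 19.417s). The same totals, plus a per-unit breakdown, are available after boot:

systemd-analyze                                     # kernel + initrd + userspace totals, as logged above
systemd-analyze blame                               # per-unit activation time, slowest first
systemd-analyze critical-chain multi-user.target    # the dependency chain that gated boot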
Jan 13 21:52:57.211023 sshd[1640]: Accepted publickey for core from 172.24.4.1 port 43952 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:57.213746 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:57.223201 systemd-logind[1442]: New session 7 of user core. Jan 13 21:52:57.232289 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:52:57.820875 sshd[1640]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:57.834204 systemd[1]: sshd@4-172.24.4.15:22-172.24.4.1:43952.service: Deactivated successfully. Jan 13 21:52:57.838477 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:52:57.843507 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:52:57.850798 systemd[1]: Started sshd@5-172.24.4.15:22-172.24.4.1:43956.service - OpenSSH per-connection server daemon (172.24.4.1:43956). Jan 13 21:52:57.855380 systemd-logind[1442]: Removed session 7. Jan 13 21:52:59.018738 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:52:59.031024 sshd[1647]: Accepted publickey for core from 172.24.4.1 port 43956 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:52:59.034213 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:52:59.035242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:52:59.058121 systemd-logind[1442]: New session 8 of user core. Jan 13 21:52:59.067124 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:52:59.359249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:52:59.374542 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:52:59.463247 kubelet[1658]: E0113 21:52:59.463162 1658 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:52:59.467490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:52:59.467789 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:52:59.645617 sshd[1647]: pam_unix(sshd:session): session closed for user core Jan 13 21:52:59.657311 systemd[1]: sshd@5-172.24.4.15:22-172.24.4.1:43956.service: Deactivated successfully. Jan 13 21:52:59.661133 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:52:59.663082 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:52:59.675545 systemd[1]: Started sshd@6-172.24.4.15:22-172.24.4.1:43960.service - OpenSSH per-connection server daemon (172.24.4.1:43960). Jan 13 21:52:59.679387 systemd-logind[1442]: Removed session 8. Jan 13 21:53:00.909455 sshd[1670]: Accepted publickey for core from 172.24.4.1 port 43960 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:53:00.913363 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:53:00.923222 systemd-logind[1442]: New session 9 of user core. Jan 13 21:53:00.935341 systemd[1]: Started session-9.scope - Session 9 of User core. 
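kubelet.service is now in a steady crash-restart loop: each failure schedules a new restart job and bumps the counter. The cadence is governed by the unit's Restart= settings, which can be read back from the running manager rather than guessed; the unit-file values in the comment are illustrative, not taken from this host:

systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
# Typical unit-file settings behind this behavior (illustrative):
#   [Service]
#   Restart=on-failure
#   RestartSec=10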
Jan 13 21:53:01.276478 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:53:01.277863 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:53:01.297421 sudo[1673]: pam_unix(sudo:session): session closed for user root Jan 13 21:53:01.505475 sshd[1670]: pam_unix(sshd:session): session closed for user core Jan 13 21:53:01.517549 systemd[1]: sshd@6-172.24.4.15:22-172.24.4.1:43960.service: Deactivated successfully. Jan 13 21:53:01.520727 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:53:01.524535 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:53:01.532569 systemd[1]: Started sshd@7-172.24.4.15:22-172.24.4.1:43974.service - OpenSSH per-connection server daemon (172.24.4.1:43974). Jan 13 21:53:01.536119 systemd-logind[1442]: Removed session 9. Jan 13 21:53:02.903093 sshd[1678]: Accepted publickey for core from 172.24.4.1 port 43974 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:53:02.905921 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:53:02.916998 systemd-logind[1442]: New session 10 of user core. Jan 13 21:53:02.922278 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:53:03.270813 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:53:03.271527 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:53:03.278905 sudo[1682]: pam_unix(sudo:session): session closed for user root Jan 13 21:53:03.290500 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:53:03.291211 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:53:03.318558 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:53:03.324651 auditctl[1685]: No rules Jan 13 21:53:03.325195 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:53:03.325669 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:53:03.334750 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:53:03.397547 augenrules[1703]: No rules Jan 13 21:53:03.399290 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:53:03.402321 sudo[1681]: pam_unix(sudo:session): session closed for user root Jan 13 21:53:03.653127 sshd[1678]: pam_unix(sshd:session): session closed for user core Jan 13 21:53:03.664445 systemd[1]: sshd@7-172.24.4.15:22-172.24.4.1:43974.service: Deactivated successfully. Jan 13 21:53:03.668089 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:53:03.669833 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:53:03.682655 systemd[1]: Started sshd@8-172.24.4.15:22-172.24.4.1:56272.service - OpenSSH per-connection server daemon (172.24.4.1:56272). Jan 13 21:53:03.685363 systemd-logind[1442]: Removed session 10. Jan 13 21:53:04.872318 sshd[1711]: Accepted publickey for core from 172.24.4.1 port 56272 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:53:04.875204 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:53:04.886420 systemd-logind[1442]: New session 11 of user core. 
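This session deliberately empties the audit ruleset: the two rule files under /etc/audit/rules.d are removed, audit-rules is restarted, and both auditctl and augenrules correctly report "No rules". The standard round trip for the rules pipeline, for reference:

augenrules --load    # recompile /etc/audit/rules.d/*.rules and load the result into the kernel
auditctl -l          # list the rules currently loaded (after the removal above: none)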
Jan 13 21:53:04.894266 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:53:05.008792 systemd-timesyncd[1370]: Contacted time server 51.178.79.86:123 (2.flatcar.pool.ntp.org). Jan 13 21:53:05.008896 systemd-timesyncd[1370]: Initial clock synchronization to Mon 2025-01-13 21:53:05.395424 UTC. Jan 13 21:53:05.326637 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:53:05.327441 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:53:05.889285 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:53:05.903659 (dockerd)[1729]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:53:06.993509 dockerd[1729]: time="2025-01-13T21:53:06.993383649Z" level=info msg="Starting up" Jan 13 21:53:07.192805 dockerd[1729]: time="2025-01-13T21:53:07.192715053Z" level=info msg="Loading containers: start." Jan 13 21:53:07.336096 kernel: Initializing XFRM netlink socket Jan 13 21:53:07.457379 systemd-networkd[1366]: docker0: Link UP Jan 13 21:53:07.488314 dockerd[1729]: time="2025-01-13T21:53:07.488189768Z" level=info msg="Loading containers: done." Jan 13 21:53:07.515513 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4180517726-merged.mount: Deactivated successfully. Jan 13 21:53:07.520393 dockerd[1729]: time="2025-01-13T21:53:07.520207723Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:53:07.520393 dockerd[1729]: time="2025-01-13T21:53:07.520334667Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:53:07.520648 dockerd[1729]: time="2025-01-13T21:53:07.520439837Z" level=info msg="Daemon has completed initialization" Jan 13 21:53:07.566303 dockerd[1729]: time="2025-01-13T21:53:07.566023678Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:53:07.566788 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:53:09.327050 containerd[1453]: time="2025-01-13T21:53:09.326552063Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 21:53:09.518930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 21:53:09.529771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:53:09.910323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:53:09.924931 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:53:10.010837 kubelet[1881]: E0113 21:53:10.010739 1881 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:53:10.013335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:53:10.013664 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
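dockerd starts cleanly on overlay2 but warns that it cannot use the native overlayfs diff because this kernel is built with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; the practical effect is slower image builds, nothing more. A quick check, assuming the kernel exposes its build config via /proc/config.gz (not all kernels do):

zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT_DIR   # confirm the kernel option the warning names
docker info --format '{{.Driver}}'                    # confirm overlay2 is the active storage driver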
Jan 13 21:53:10.332772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628750662.mount: Deactivated successfully. Jan 13 21:53:12.446843 containerd[1453]: time="2025-01-13T21:53:12.446783316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:12.448431 containerd[1453]: time="2025-01-13T21:53:12.448327656Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675650" Jan 13 21:53:12.449268 containerd[1453]: time="2025-01-13T21:53:12.449208163Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:12.454104 containerd[1453]: time="2025-01-13T21:53:12.454055869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:12.455067 containerd[1453]: time="2025-01-13T21:53:12.455024101Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 3.128407462s" Jan 13 21:53:12.455067 containerd[1453]: time="2025-01-13T21:53:12.455063633Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 13 21:53:12.481783 containerd[1453]: time="2025-01-13T21:53:12.481727642Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 21:53:14.911640 containerd[1453]: time="2025-01-13T21:53:14.911435087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:14.932155 containerd[1453]: time="2025-01-13T21:53:14.917760018Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606417" Jan 13 21:53:14.932155 containerd[1453]: time="2025-01-13T21:53:14.922213000Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:14.932398 containerd[1453]: time="2025-01-13T21:53:14.932263417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:14.936724 containerd[1453]: time="2025-01-13T21:53:14.935857502Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.454075956s" Jan 13 21:53:14.937017 containerd[1453]: time="2025-01-13T21:53:14.936932808Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" 
returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 13 21:53:14.989749 containerd[1453]: time="2025-01-13T21:53:14.989667598Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 21:53:16.603930 containerd[1453]: time="2025-01-13T21:53:16.603853336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:16.605362 containerd[1453]: time="2025-01-13T21:53:16.605138729Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783043" Jan 13 21:53:16.606523 containerd[1453]: time="2025-01-13T21:53:16.606431154Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:16.611274 containerd[1453]: time="2025-01-13T21:53:16.609942602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:16.611274 containerd[1453]: time="2025-01-13T21:53:16.611149239Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.620860691s" Jan 13 21:53:16.611274 containerd[1453]: time="2025-01-13T21:53:16.611177579Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 13 21:53:16.639407 containerd[1453]: time="2025-01-13T21:53:16.639375106Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 21:53:18.028931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1418984386.mount: Deactivated successfully. 
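The control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler so far) are being pulled through containerd, each entry closing with the repo digest and unpacked size. The same pulls can be driven or verified over the CRI socket; crictl is assumed to be installed, and the socket path matches the ContainerdEndpoint logged in the CRI config above:

crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-scheduler:v1.30.8
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images --digests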
Jan 13 21:53:18.542993 containerd[1453]: time="2025-01-13T21:53:18.542902395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:18.545298 containerd[1453]: time="2025-01-13T21:53:18.545254015Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478" Jan 13 21:53:18.547162 containerd[1453]: time="2025-01-13T21:53:18.546469927Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:18.549116 containerd[1453]: time="2025-01-13T21:53:18.549027688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:18.550054 containerd[1453]: time="2025-01-13T21:53:18.549871042Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.910287434s" Jan 13 21:53:18.550054 containerd[1453]: time="2025-01-13T21:53:18.549914350Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 21:53:18.586479 containerd[1453]: time="2025-01-13T21:53:18.586405904Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:53:19.202258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2422642272.mount: Deactivated successfully. Jan 13 21:53:19.556849 update_engine[1443]: I20250113 21:53:19.556053 1443 update_attempter.cc:509] Updating boot flags... Jan 13 21:53:19.602038 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2000) Jan 13 21:53:19.698071 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2000) Jan 13 21:53:20.018314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 13 21:53:20.026460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:53:20.623339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:53:20.625599 (kubelet)[2031]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:53:20.762717 kubelet[2031]: E0113 21:53:20.762671 2031 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:53:20.766748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:53:20.766921 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:53:21.275002 containerd[1453]: time="2025-01-13T21:53:21.274908150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:21.276572 containerd[1453]: time="2025-01-13T21:53:21.276535748Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 13 21:53:21.278087 containerd[1453]: time="2025-01-13T21:53:21.278034823Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:21.281869 containerd[1453]: time="2025-01-13T21:53:21.281815848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:21.284089 containerd[1453]: time="2025-01-13T21:53:21.283429056Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.696964948s" Jan 13 21:53:21.284089 containerd[1453]: time="2025-01-13T21:53:21.283489178Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:53:21.309823 containerd[1453]: time="2025-01-13T21:53:21.309762114Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:53:21.859257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835568588.mount: Deactivated successfully. 
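The update_engine "Updating boot flags" entry is Flatcar's A/B update machinery marking the partition it booted from, and the BTRFS duplicate-device warnings come from udev rescanning /dev/vda3 while the partition table is re-read; neither indicates a fault here. Updater state can be checked with the client Flatcar ships (command per Flatcar's documentation; it is not exercised in this log):

update_engine_client -status    # A/B updater state; locksmithd above already reported UPDATE_STATUS_IDLE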
Jan 13 21:53:21.873576 containerd[1453]: time="2025-01-13T21:53:21.873454441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:21.875236 containerd[1453]: time="2025-01-13T21:53:21.875148605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 13 21:53:21.877991 containerd[1453]: time="2025-01-13T21:53:21.876430960Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:21.881226 containerd[1453]: time="2025-01-13T21:53:21.881183517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:21.883362 containerd[1453]: time="2025-01-13T21:53:21.883292977Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 573.480038ms" Jan 13 21:53:21.883448 containerd[1453]: time="2025-01-13T21:53:21.883370878Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:53:21.930943 containerd[1453]: time="2025-01-13T21:53:21.930903255Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 21:53:22.622841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467543072.mount: Deactivated successfully. Jan 13 21:53:25.481706 containerd[1453]: time="2025-01-13T21:53:25.481567599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:25.484646 containerd[1453]: time="2025-01-13T21:53:25.484528516Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 13 21:53:25.486303 containerd[1453]: time="2025-01-13T21:53:25.486145279Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:25.494533 containerd[1453]: time="2025-01-13T21:53:25.494462250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:53:25.499023 containerd[1453]: time="2025-01-13T21:53:25.498200573Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.567027263s" Jan 13 21:53:25.499023 containerd[1453]: time="2025-01-13T21:53:25.498304926Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 13 21:53:29.255657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:53:29.266323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:53:29.304373 systemd[1]: Reloading requested from client PID 2175 ('systemctl') (unit session-11.scope)... Jan 13 21:53:29.304711 systemd[1]: Reloading... Jan 13 21:53:29.431013 zram_generator::config[2214]: No configuration found. Jan 13 21:53:29.596897 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:53:29.680146 systemd[1]: Reloading finished in 374 ms. Jan 13 21:53:29.725331 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:53:29.725404 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:53:29.725913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:53:29.729271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:53:29.840496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:53:29.852645 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:53:30.107050 kubelet[2279]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:53:30.107050 kubelet[2279]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:53:30.107050 kubelet[2279]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:53:30.107050 kubelet[2279]: I0113 21:53:30.106856 2279 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:53:30.566882 kubelet[2279]: I0113 21:53:30.566800 2279 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:53:30.566882 kubelet[2279]: I0113 21:53:30.566832 2279 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:53:30.567431 kubelet[2279]: I0113 21:53:30.567072 2279 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:53:31.059074 kubelet[2279]: I0113 21:53:31.058745 2279 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:53:31.098037 kubelet[2279]: E0113 21:53:31.097952 2279 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:31.260664 kubelet[2279]: I0113 21:53:31.260012 2279 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:53:31.260664 kubelet[2279]: I0113 21:53:31.260262 2279 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:53:31.260664 kubelet[2279]: I0113 21:53:31.260302 2279 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-2-4850f65211.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:53:31.312932 kubelet[2279]: I0113 21:53:31.311717 2279 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:53:31.312932 kubelet[2279]: I0113 21:53:31.311803 2279 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:53:31.312932 kubelet[2279]: I0113 21:53:31.312118 2279 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:53:31.349594 kubelet[2279]: I0113 21:53:31.349187 2279 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:53:31.349594 kubelet[2279]: I0113 21:53:31.349243 2279 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:53:31.349594 kubelet[2279]: I0113 21:53:31.349280 2279 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:53:31.349594 kubelet[2279]: I0113 21:53:31.349306 2279 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:53:31.568773 kubelet[2279]: W0113 21:53:31.565905 2279 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-4850f65211.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:31.568773 kubelet[2279]: E0113 21:53:31.566139 2279 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-4850f65211.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:31.609505 kubelet[2279]: W0113 21:53:31.609083 2279 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:31.609505 kubelet[2279]: E0113 21:53:31.609191 2279 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:31.610730 kubelet[2279]: I0113 21:53:31.610320 2279 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:53:31.654589 kubelet[2279]: I0113 21:53:31.653121 2279 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:53:31.654589 kubelet[2279]: W0113 21:53:31.653245 2279 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:53:31.654589 kubelet[2279]: I0113 21:53:31.654455 2279 server.go:1264] "Started kubelet" Jan 13 21:53:31.664735 kubelet[2279]: I0113 21:53:31.664636 2279 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:53:31.668814 kubelet[2279]: I0113 21:53:31.666740 2279 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:53:31.673041 kubelet[2279]: I0113 21:53:31.671400 2279 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:53:31.673041 kubelet[2279]: I0113 21:53:31.671718 2279 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:53:31.673041 kubelet[2279]: I0113 21:53:31.671827 2279 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:53:31.673041 kubelet[2279]: E0113 21:53:31.672134 2279 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.15:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-2-4850f65211.novalocal.181a5f2354b55e4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-4850f65211.novalocal,UID:ci-4081-3-0-2-4850f65211.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-4850f65211.novalocal,},FirstTimestamp:2025-01-13 21:53:31.654413903 +0000 UTC m=+1.793833233,LastTimestamp:2025-01-13 21:53:31.654413903 +0000 UTC m=+1.793833233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-4850f65211.novalocal,}" Jan 13 21:53:31.676861 kubelet[2279]: I0113 21:53:31.676802 2279 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:53:31.678407 kubelet[2279]: I0113 21:53:31.678366 2279 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:53:31.678535 kubelet[2279]: I0113 21:53:31.678489 2279 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:53:31.688152 kubelet[2279]: W0113 21:53:31.688046 2279 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
172.24.4.15:6443: connect: connection refused Jan 13 21:53:31.688451 kubelet[2279]: E0113 21:53:31.688425 2279 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:31.688771 kubelet[2279]: E0113 21:53:31.688715 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-4850f65211.novalocal?timeout=10s\": dial tcp 172.24.4.15:6443: connect: connection refused" interval="200ms" Jan 13 21:53:31.692236 kubelet[2279]: I0113 21:53:31.692193 2279 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:53:31.692677 kubelet[2279]: I0113 21:53:31.692634 2279 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:53:31.699779 kubelet[2279]: E0113 21:53:31.699574 2279 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:53:31.702012 kubelet[2279]: I0113 21:53:31.700302 2279 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:53:31.737182 kubelet[2279]: I0113 21:53:31.737098 2279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:53:31.742814 kubelet[2279]: I0113 21:53:31.742768 2279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:53:31.743211 kubelet[2279]: I0113 21:53:31.743153 2279 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:53:31.743300 kubelet[2279]: I0113 21:53:31.743238 2279 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:53:31.743559 kubelet[2279]: E0113 21:53:31.743367 2279 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:53:31.746281 kubelet[2279]: W0113 21:53:31.746225 2279 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:31.748018 kubelet[2279]: E0113 21:53:31.747931 2279 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:31.759635 kubelet[2279]: I0113 21:53:31.759603 2279 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:53:31.759635 kubelet[2279]: I0113 21:53:31.759623 2279 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:53:31.759635 kubelet[2279]: I0113 21:53:31.759641 2279 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:53:31.766358 kubelet[2279]: I0113 21:53:31.766324 2279 policy_none.go:49] "None policy: Start" Jan 13 21:53:31.766953 kubelet[2279]: I0113 21:53:31.766934 2279 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:53:31.767029 kubelet[2279]: I0113 21:53:31.767018 
2279 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:53:31.774975 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:53:31.779287 kubelet[2279]: I0113 21:53:31.779190 2279 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.779788 kubelet[2279]: E0113 21:53:31.779730 2279 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.15:6443/api/v1/nodes\": dial tcp 172.24.4.15:6443: connect: connection refused" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.787561 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:53:31.792753 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:53:31.803825 kubelet[2279]: I0113 21:53:31.803764 2279 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:53:31.804028 kubelet[2279]: I0113 21:53:31.803954 2279 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:53:31.804417 kubelet[2279]: I0113 21:53:31.804077 2279 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:53:31.806280 kubelet[2279]: E0113 21:53:31.806234 2279 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-2-4850f65211.novalocal\" not found" Jan 13 21:53:31.843774 kubelet[2279]: I0113 21:53:31.843631 2279 topology_manager.go:215] "Topology Admit Handler" podUID="0a9f75d4273a9dc35e9ce0b7efb74dc7" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.847105 kubelet[2279]: I0113 21:53:31.846472 2279 topology_manager.go:215] "Topology Admit Handler" podUID="e9f3e1fd02e01bc074b602e2b0eee98b" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.848887 kubelet[2279]: I0113 21:53:31.848663 2279 topology_manager.go:215] "Topology Admit Handler" podUID="4f02ef699d0723af2ff59f7ad812d224" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.858197 systemd[1]: Created slice kubepods-burstable-pod0a9f75d4273a9dc35e9ce0b7efb74dc7.slice - libcontainer container kubepods-burstable-pod0a9f75d4273a9dc35e9ce0b7efb74dc7.slice. Jan 13 21:53:31.876911 systemd[1]: Created slice kubepods-burstable-pode9f3e1fd02e01bc074b602e2b0eee98b.slice - libcontainer container kubepods-burstable-pode9f3e1fd02e01bc074b602e2b0eee98b.slice. 
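The HardEvictionThresholds dumped by container_manager_linux.go:270 above mix absolute quantities (memory.available < 100Mi) with percentages of capacity (nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A minimal Go sketch of how such a threshold is evaluated follows; the types and the breached helper are illustrative simplifications, not the kubelet's eviction-manager code.

package main

import "fmt"

// threshold mirrors one entry of the HardEvictionThresholds dump above: a
// signal is compared against either an absolute floor in bytes or a
// percentage of capacity. Names are ours, not the kubelet's actual types.
type threshold struct {
	signal   string
	quantity int64   // absolute floor in bytes; 0 if unset
	percent  float64 // fraction of capacity; 0 if unset
}

func breached(t threshold, available, capacity int64) bool {
	floor := t.quantity
	if t.percent > 0 {
		floor = int64(t.percent * float64(capacity))
	}
	return available < floor
}

func main() {
	// The five thresholds logged by container_manager_linux.go:270 above.
	thresholds := []threshold{
		{signal: "memory.available", quantity: 100 << 20}, // 100Mi
		{signal: "nodefs.available", percent: 0.10},
		{signal: "nodefs.inodesFree", percent: 0.05},
		{signal: "imagefs.available", percent: 0.15},
		{signal: "imagefs.inodesFree", percent: 0.05},
	}
	// Hypothetical observed stats: 90Mi free memory on a 4Gi node.
	for _, t := range thresholds[:1] {
		fmt.Println(t.signal, "breached:", breached(t, 90<<20, 4<<30))
	}
}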
Jan 13 21:53:31.879526 kubelet[2279]: I0113 21:53:31.879121 2279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f02ef699d0723af2ff59f7ad812d224-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"4f02ef699d0723af2ff59f7ad812d224\") " pod="kube-system/kube-scheduler-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.879526 kubelet[2279]: I0113 21:53:31.879195 2279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a9f75d4273a9dc35e9ce0b7efb74dc7-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"0a9f75d4273a9dc35e9ce0b7efb74dc7\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.879526 kubelet[2279]: I0113 21:53:31.879247 2279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a9f75d4273a9dc35e9ce0b7efb74dc7-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"0a9f75d4273a9dc35e9ce0b7efb74dc7\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.879526 kubelet[2279]: I0113 21:53:31.879290 2279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a9f75d4273a9dc35e9ce0b7efb74dc7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"0a9f75d4273a9dc35e9ce0b7efb74dc7\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.879526 kubelet[2279]: I0113 21:53:31.879331 2279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9f3e1fd02e01bc074b602e2b0eee98b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"e9f3e1fd02e01bc074b602e2b0eee98b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.880119 kubelet[2279]: I0113 21:53:31.879375 2279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9f3e1fd02e01bc074b602e2b0eee98b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"e9f3e1fd02e01bc074b602e2b0eee98b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.880119 kubelet[2279]: I0113 21:53:31.879417 2279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9f3e1fd02e01bc074b602e2b0eee98b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"e9f3e1fd02e01bc074b602e2b0eee98b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.880119 kubelet[2279]: I0113 21:53:31.879456 2279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9f3e1fd02e01bc074b602e2b0eee98b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"e9f3e1fd02e01bc074b602e2b0eee98b\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.880119 kubelet[2279]: I0113 21:53:31.879549 2279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9f3e1fd02e01bc074b602e2b0eee98b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"e9f3e1fd02e01bc074b602e2b0eee98b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.890001 kubelet[2279]: E0113 21:53:31.889834 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-4850f65211.novalocal?timeout=10s\": dial tcp 172.24.4.15:6443: connect: connection refused" interval="400ms" Jan 13 21:53:31.896185 systemd[1]: Created slice kubepods-burstable-pod4f02ef699d0723af2ff59f7ad812d224.slice - libcontainer container kubepods-burstable-pod4f02ef699d0723af2ff59f7ad812d224.slice. Jan 13 21:53:31.983603 kubelet[2279]: I0113 21:53:31.983484 2279 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:31.984159 kubelet[2279]: E0113 21:53:31.984007 2279 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.15:6443/api/v1/nodes\": dial tcp 172.24.4.15:6443: connect: connection refused" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:32.171427 containerd[1453]: time="2025-01-13T21:53:32.171188121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal,Uid:0a9f75d4273a9dc35e9ce0b7efb74dc7,Namespace:kube-system,Attempt:0,}" Jan 13 21:53:32.196331 containerd[1453]: time="2025-01-13T21:53:32.195912223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal,Uid:e9f3e1fd02e01bc074b602e2b0eee98b,Namespace:kube-system,Attempt:0,}" Jan 13 21:53:32.202599 containerd[1453]: time="2025-01-13T21:53:32.202092319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-2-4850f65211.novalocal,Uid:4f02ef699d0723af2ff59f7ad812d224,Namespace:kube-system,Attempt:0,}" Jan 13 21:53:32.291146 kubelet[2279]: E0113 21:53:32.291044 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-4850f65211.novalocal?timeout=10s\": dial tcp 172.24.4.15:6443: connect: connection refused" interval="800ms" Jan 13 21:53:32.388554 kubelet[2279]: I0113 21:53:32.388022 2279 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:32.389037 kubelet[2279]: E0113 21:53:32.388925 2279 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.15:6443/api/v1/nodes\": dial tcp 172.24.4.15:6443: connect: connection refused" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:32.641529 kubelet[2279]: W0113 21:53:32.641216 2279 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:32.641529 kubelet[2279]: E0113 21:53:32.641411 2279 reflector.go:150] k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:32.680399 kubelet[2279]: W0113 21:53:32.680296 2279 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:32.680600 kubelet[2279]: E0113 21:53:32.680431 2279 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:32.799992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1993626672.mount: Deactivated successfully. Jan 13 21:53:32.818404 containerd[1453]: time="2025-01-13T21:53:32.818198168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:53:32.820684 containerd[1453]: time="2025-01-13T21:53:32.820560235Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:53:32.822925 containerd[1453]: time="2025-01-13T21:53:32.822829760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:53:32.823752 containerd[1453]: time="2025-01-13T21:53:32.823617005Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:53:32.824998 containerd[1453]: time="2025-01-13T21:53:32.824865406Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:53:32.828167 containerd[1453]: time="2025-01-13T21:53:32.827899600Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 13 21:53:32.828167 containerd[1453]: time="2025-01-13T21:53:32.827920652Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:53:32.834536 containerd[1453]: time="2025-01-13T21:53:32.834424328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:53:32.838863 containerd[1453]: time="2025-01-13T21:53:32.838246330Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 636.011259ms" Jan 13 21:53:32.847484 containerd[1453]: time="2025-01-13T21:53:32.847399704Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 651.29165ms" Jan 13 21:53:32.848731 containerd[1453]: time="2025-01-13T21:53:32.848684688Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 677.346833ms" Jan 13 21:53:33.058135 containerd[1453]: time="2025-01-13T21:53:33.056754649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:53:33.058135 containerd[1453]: time="2025-01-13T21:53:33.056899200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:53:33.060231 containerd[1453]: time="2025-01-13T21:53:33.056946008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:53:33.066498 containerd[1453]: time="2025-01-13T21:53:33.066214047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:53:33.069222 containerd[1453]: time="2025-01-13T21:53:33.068540599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:53:33.069222 containerd[1453]: time="2025-01-13T21:53:33.068668567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:53:33.069222 containerd[1453]: time="2025-01-13T21:53:33.068713660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:53:33.069222 containerd[1453]: time="2025-01-13T21:53:33.068885649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:53:33.073343 containerd[1453]: time="2025-01-13T21:53:33.073111693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:53:33.074523 containerd[1453]: time="2025-01-13T21:53:33.074272009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:53:33.074523 containerd[1453]: time="2025-01-13T21:53:33.074330286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:53:33.077226 containerd[1453]: time="2025-01-13T21:53:33.074909982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:53:33.100314 kubelet[2279]: E0113 21:53:33.099747 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-4850f65211.novalocal?timeout=10s\": dial tcp 172.24.4.15:6443: connect: connection refused" interval="1.6s" Jan 13 21:53:33.101220 systemd[1]: Started cri-containerd-57895c449d16fc32a2bd56b18e9b5b029dfce072a13d8c0544506bdb81322113.scope - libcontainer container 57895c449d16fc32a2bd56b18e9b5b029dfce072a13d8c0544506bdb81322113. Jan 13 21:53:33.125134 systemd[1]: Started cri-containerd-35c8f70cecfbab2236518644530c0698eda0f4ffb1b46e8a5da2b00be2ada6a1.scope - libcontainer container 35c8f70cecfbab2236518644530c0698eda0f4ffb1b46e8a5da2b00be2ada6a1. Jan 13 21:53:33.127019 systemd[1]: Started cri-containerd-8ae886c428906cf4b039dff9b47ded7e9b6be2db5792ea44bb826095d41d0944.scope - libcontainer container 8ae886c428906cf4b039dff9b47ded7e9b6be2db5792ea44bb826095d41d0944. Jan 13 21:53:33.132995 kubelet[2279]: W0113 21:53:33.132790 2279 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-4850f65211.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:33.132995 kubelet[2279]: E0113 21:53:33.132894 2279 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-4850f65211.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:33.151380 kubelet[2279]: W0113 21:53:33.151285 2279 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:33.151380 kubelet[2279]: E0113 21:53:33.151359 2279 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:33.188327 kubelet[2279]: E0113 21:53:33.188273 2279 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.15:6443: connect: connection refused Jan 13 21:53:33.195795 kubelet[2279]: I0113 21:53:33.194816 2279 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:33.196119 kubelet[2279]: E0113 21:53:33.196094 2279 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.15:6443/api/v1/nodes\": dial tcp 172.24.4.15:6443: connect: connection refused" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:33.199989 containerd[1453]: time="2025-01-13T21:53:33.199154098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal,Uid:e9f3e1fd02e01bc074b602e2b0eee98b,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"8ae886c428906cf4b039dff9b47ded7e9b6be2db5792ea44bb826095d41d0944\"" Jan 13 21:53:33.206188 containerd[1453]: time="2025-01-13T21:53:33.206122711Z" level=info msg="CreateContainer within sandbox \"8ae886c428906cf4b039dff9b47ded7e9b6be2db5792ea44bb826095d41d0944\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:53:33.207519 containerd[1453]: time="2025-01-13T21:53:33.207486646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal,Uid:0a9f75d4273a9dc35e9ce0b7efb74dc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"35c8f70cecfbab2236518644530c0698eda0f4ffb1b46e8a5da2b00be2ada6a1\"" Jan 13 21:53:33.226636 containerd[1453]: time="2025-01-13T21:53:33.226539870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-2-4850f65211.novalocal,Uid:4f02ef699d0723af2ff59f7ad812d224,Namespace:kube-system,Attempt:0,} returns sandbox id \"57895c449d16fc32a2bd56b18e9b5b029dfce072a13d8c0544506bdb81322113\"" Jan 13 21:53:33.228071 containerd[1453]: time="2025-01-13T21:53:33.227777785Z" level=info msg="CreateContainer within sandbox \"35c8f70cecfbab2236518644530c0698eda0f4ffb1b46e8a5da2b00be2ada6a1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:53:33.232613 containerd[1453]: time="2025-01-13T21:53:33.232573323Z" level=info msg="CreateContainer within sandbox \"57895c449d16fc32a2bd56b18e9b5b029dfce072a13d8c0544506bdb81322113\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:53:33.280087 containerd[1453]: time="2025-01-13T21:53:33.280023300Z" level=info msg="CreateContainer within sandbox \"35c8f70cecfbab2236518644530c0698eda0f4ffb1b46e8a5da2b00be2ada6a1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b939106829d97b26e0c9ebceaf37d1069600dbb5ffcf09ad3a9021251ec9eaa3\"" Jan 13 21:53:33.284577 containerd[1453]: time="2025-01-13T21:53:33.284346785Z" level=info msg="StartContainer for \"b939106829d97b26e0c9ebceaf37d1069600dbb5ffcf09ad3a9021251ec9eaa3\"" Jan 13 21:53:33.285399 containerd[1453]: time="2025-01-13T21:53:33.285163257Z" level=info msg="CreateContainer within sandbox \"8ae886c428906cf4b039dff9b47ded7e9b6be2db5792ea44bb826095d41d0944\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e8285775cda703f72dd02acaa15c2276a0691d5c5d8aa2789ff7dbea820b28cb\"" Jan 13 21:53:33.286602 containerd[1453]: time="2025-01-13T21:53:33.286524193Z" level=info msg="CreateContainer within sandbox \"57895c449d16fc32a2bd56b18e9b5b029dfce072a13d8c0544506bdb81322113\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f0d1cc020996913f6d384fdd9efaeb5f3c289101d2509adee2f2c2dadf26f26b\"" Jan 13 21:53:33.287178 containerd[1453]: time="2025-01-13T21:53:33.287109046Z" level=info msg="StartContainer for \"e8285775cda703f72dd02acaa15c2276a0691d5c5d8aa2789ff7dbea820b28cb\"" Jan 13 21:53:33.289999 containerd[1453]: time="2025-01-13T21:53:33.289106691Z" level=info msg="StartContainer for \"f0d1cc020996913f6d384fdd9efaeb5f3c289101d2509adee2f2c2dadf26f26b\"" Jan 13 21:53:33.345192 systemd[1]: Started cri-containerd-b939106829d97b26e0c9ebceaf37d1069600dbb5ffcf09ad3a9021251ec9eaa3.scope - libcontainer container b939106829d97b26e0c9ebceaf37d1069600dbb5ffcf09ad3a9021251ec9eaa3. 
Jan 13 21:53:33.361293 systemd[1]: Started cri-containerd-f0d1cc020996913f6d384fdd9efaeb5f3c289101d2509adee2f2c2dadf26f26b.scope - libcontainer container f0d1cc020996913f6d384fdd9efaeb5f3c289101d2509adee2f2c2dadf26f26b. Jan 13 21:53:33.373124 systemd[1]: Started cri-containerd-e8285775cda703f72dd02acaa15c2276a0691d5c5d8aa2789ff7dbea820b28cb.scope - libcontainer container e8285775cda703f72dd02acaa15c2276a0691d5c5d8aa2789ff7dbea820b28cb. Jan 13 21:53:33.425540 containerd[1453]: time="2025-01-13T21:53:33.425479930Z" level=info msg="StartContainer for \"b939106829d97b26e0c9ebceaf37d1069600dbb5ffcf09ad3a9021251ec9eaa3\" returns successfully" Jan 13 21:53:33.451566 containerd[1453]: time="2025-01-13T21:53:33.451313818Z" level=info msg="StartContainer for \"e8285775cda703f72dd02acaa15c2276a0691d5c5d8aa2789ff7dbea820b28cb\" returns successfully" Jan 13 21:53:33.457318 containerd[1453]: time="2025-01-13T21:53:33.457259601Z" level=info msg="StartContainer for \"f0d1cc020996913f6d384fdd9efaeb5f3c289101d2509adee2f2c2dadf26f26b\" returns successfully" Jan 13 21:53:34.798798 kubelet[2279]: I0113 21:53:34.798610 2279 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:35.397974 kubelet[2279]: E0113 21:53:35.397883 2279 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-2-4850f65211.novalocal\" not found" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:35.471370 kubelet[2279]: I0113 21:53:35.471320 2279 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:36.355086 kubelet[2279]: I0113 21:53:36.354749 2279 apiserver.go:52] "Watching apiserver" Jan 13 21:53:36.379487 kubelet[2279]: I0113 21:53:36.379407 2279 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:53:36.530770 kubelet[2279]: W0113 21:53:36.530687 2279 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:53:37.836392 kubelet[2279]: W0113 21:53:37.836295 2279 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:53:38.064726 systemd[1]: Reloading requested from client PID 2553 ('systemctl') (unit session-11.scope)... Jan 13 21:53:38.064763 systemd[1]: Reloading... Jan 13 21:53:38.193033 zram_generator::config[2592]: No configuration found. Jan 13 21:53:38.370027 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:53:38.470204 systemd[1]: Reloading finished in 404 ms. Jan 13 21:53:38.516278 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:53:38.529100 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:53:38.529342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:53:38.529390 systemd[1]: kubelet.service: Consumed 1.290s CPU time, 116.2M memory peak, 0B memory swap peak. Jan 13 21:53:38.535226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:53:38.746897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
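Each control-plane pod above gets a "RunPodSandbox ... returns sandbox id" line from containerd before its container is created and started. A hedged Go sketch of a small tool that scans journal text on stdin and pairs pod names with their sandbox IDs follows; the regular expression is a best-effort match for the log format shown here, not a stable containerd interface.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches lines like:
//   RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-...,Uid:...,}
//   returns sandbox id \"35c8f70c...\"
// The quote may or may not be backslash-escaped depending on how the journal
// was captured, hence the optional \\ in the pattern.
var re = regexp.MustCompile(`RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),.*returns sandbox id \\?"([0-9a-f]+)\\?"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s -> %s\n", m[1], m[2])
		}
	}
}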
Jan 13 21:53:38.756209 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:53:38.806588 kubelet[2655]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:53:38.806588 kubelet[2655]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:53:38.806588 kubelet[2655]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:53:38.806979 kubelet[2655]: I0113 21:53:38.806656 2655 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:53:38.814647 kubelet[2655]: I0113 21:53:38.814607 2655 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:53:38.814647 kubelet[2655]: I0113 21:53:38.814639 2655 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:53:38.814884 kubelet[2655]: I0113 21:53:38.814859 2655 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:53:38.816494 kubelet[2655]: I0113 21:53:38.816470 2655 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:53:38.817826 kubelet[2655]: I0113 21:53:38.817693 2655 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:53:38.824992 kubelet[2655]: I0113 21:53:38.824937 2655 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:53:38.825142 kubelet[2655]: I0113 21:53:38.825121 2655 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:53:38.825320 kubelet[2655]: I0113 21:53:38.825146 2655 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-2-4850f65211.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:53:38.825509 kubelet[2655]: I0113 21:53:38.825333 2655 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:53:38.825509 kubelet[2655]: I0113 21:53:38.825345 2655 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:53:38.825509 kubelet[2655]: I0113 21:53:38.825385 2655 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:53:38.825509 kubelet[2655]: I0113 21:53:38.825494 2655 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:53:38.825509 kubelet[2655]: I0113 21:53:38.825506 2655 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:53:38.826272 kubelet[2655]: I0113 21:53:38.825525 2655 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:53:38.826272 kubelet[2655]: I0113 21:53:38.825542 2655 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:53:38.826931 kubelet[2655]: I0113 21:53:38.826916 2655 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:53:38.827221 kubelet[2655]: I0113 21:53:38.827207 2655 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:53:38.827723 kubelet[2655]: I0113 21:53:38.827710 2655 server.go:1264] "Started kubelet" Jan 13 21:53:38.832034 kubelet[2655]: I0113 21:53:38.831242 2655 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:53:38.832034 kubelet[2655]: I0113 21:53:38.831494 2655 server.go:227] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:53:38.832811 kubelet[2655]: I0113 21:53:38.832457 2655 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:53:38.835032 kubelet[2655]: I0113 21:53:38.834491 2655 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:53:38.838221 kubelet[2655]: I0113 21:53:38.836892 2655 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:53:38.842174 kubelet[2655]: I0113 21:53:38.841564 2655 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:53:38.842174 kubelet[2655]: I0113 21:53:38.842033 2655 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:53:38.842268 kubelet[2655]: I0113 21:53:38.842219 2655 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:53:38.853032 kubelet[2655]: I0113 21:53:38.849900 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:53:38.853032 kubelet[2655]: I0113 21:53:38.851458 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:53:38.853032 kubelet[2655]: I0113 21:53:38.851499 2655 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:53:38.853032 kubelet[2655]: I0113 21:53:38.851517 2655 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:53:38.853032 kubelet[2655]: E0113 21:53:38.851557 2655 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:53:38.874082 kubelet[2655]: I0113 21:53:38.873789 2655 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:53:38.874082 kubelet[2655]: I0113 21:53:38.873811 2655 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:53:38.874082 kubelet[2655]: I0113 21:53:38.873889 2655 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:53:38.879136 kubelet[2655]: E0113 21:53:38.879114 2655 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:53:38.939500 kubelet[2655]: I0113 21:53:38.939475 2655 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:53:38.939500 kubelet[2655]: I0113 21:53:38.939490 2655 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:53:38.939500 kubelet[2655]: I0113 21:53:38.939506 2655 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:53:38.939791 kubelet[2655]: I0113 21:53:38.939741 2655 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:53:38.939791 kubelet[2655]: I0113 21:53:38.939754 2655 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:53:38.939791 kubelet[2655]: I0113 21:53:38.939772 2655 policy_none.go:49] "None policy: Start" Jan 13 21:53:38.940652 kubelet[2655]: I0113 21:53:38.940635 2655 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:53:38.940705 kubelet[2655]: I0113 21:53:38.940656 2655 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:53:38.940792 kubelet[2655]: I0113 21:53:38.940777 2655 state_mem.go:75] "Updated machine memory state" Jan 13 21:53:38.945667 kubelet[2655]: I0113 21:53:38.945646 2655 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:38.953477 kubelet[2655]: I0113 21:53:38.952472 2655 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:53:38.953477 kubelet[2655]: E0113 21:53:38.952569 2655 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:53:38.953477 kubelet[2655]: I0113 21:53:38.952737 2655 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:53:38.953477 kubelet[2655]: I0113 21:53:38.952853 2655 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:53:38.957789 kubelet[2655]: I0113 21:53:38.957748 2655 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:38.957996 kubelet[2655]: I0113 21:53:38.957811 2655 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.026608 sudo[2689]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 21:53:39.027307 sudo[2689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 21:53:39.152922 kubelet[2655]: I0113 21:53:39.152856 2655 topology_manager.go:215] "Topology Admit Handler" podUID="4f02ef699d0723af2ff59f7ad812d224" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.153080 kubelet[2655]: I0113 21:53:39.152987 2655 topology_manager.go:215] "Topology Admit Handler" podUID="0a9f75d4273a9dc35e9ce0b7efb74dc7" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.153080 kubelet[2655]: I0113 21:53:39.153064 2655 topology_manager.go:215] "Topology Admit Handler" podUID="e9f3e1fd02e01bc074b602e2b0eee98b" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.166633 kubelet[2655]: W0113 21:53:39.166435 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:53:39.166633 
kubelet[2655]: E0113 21:53:39.166618 2655 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.167047 kubelet[2655]: W0113 21:53:39.166910 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:53:39.167047 kubelet[2655]: E0113 21:53:39.166990 2655 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.168969 kubelet[2655]: W0113 21:53:39.167261 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:53:39.243007 kubelet[2655]: I0113 21:53:39.242781 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a9f75d4273a9dc35e9ce0b7efb74dc7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"0a9f75d4273a9dc35e9ce0b7efb74dc7\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.243007 kubelet[2655]: I0113 21:53:39.242817 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9f3e1fd02e01bc074b602e2b0eee98b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"e9f3e1fd02e01bc074b602e2b0eee98b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.243007 kubelet[2655]: I0113 21:53:39.242843 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a9f75d4273a9dc35e9ce0b7efb74dc7-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"0a9f75d4273a9dc35e9ce0b7efb74dc7\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.243007 kubelet[2655]: I0113 21:53:39.242864 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a9f75d4273a9dc35e9ce0b7efb74dc7-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"0a9f75d4273a9dc35e9ce0b7efb74dc7\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.243232 kubelet[2655]: I0113 21:53:39.242883 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9f3e1fd02e01bc074b602e2b0eee98b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"e9f3e1fd02e01bc074b602e2b0eee98b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.243232 kubelet[2655]: I0113 21:53:39.242920 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9f3e1fd02e01bc074b602e2b0eee98b-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"e9f3e1fd02e01bc074b602e2b0eee98b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.243232 kubelet[2655]: I0113 21:53:39.242944 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9f3e1fd02e01bc074b602e2b0eee98b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"e9f3e1fd02e01bc074b602e2b0eee98b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.243635 kubelet[2655]: I0113 21:53:39.243616 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9f3e1fd02e01bc074b602e2b0eee98b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"e9f3e1fd02e01bc074b602e2b0eee98b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.243701 kubelet[2655]: I0113 21:53:39.243651 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f02ef699d0723af2ff59f7ad812d224-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-2-4850f65211.novalocal\" (UID: \"4f02ef699d0723af2ff59f7ad812d224\") " pod="kube-system/kube-scheduler-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.590491 sudo[2689]: pam_unix(sudo:session): session closed for user root Jan 13 21:53:39.826640 kubelet[2655]: I0113 21:53:39.826615 2655 apiserver.go:52] "Watching apiserver" Jan 13 21:53:39.842502 kubelet[2655]: I0113 21:53:39.842424 2655 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:53:39.923130 kubelet[2655]: W0113 21:53:39.923105 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 13 21:53:39.923234 kubelet[2655]: E0113 21:53:39.923164 2655 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal" Jan 13 21:53:39.955301 kubelet[2655]: I0113 21:53:39.955228 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-2-4850f65211.novalocal" podStartSLOduration=0.955211973 podStartE2EDuration="955.211973ms" podCreationTimestamp="2025-01-13 21:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:53:39.941357241 +0000 UTC m=+1.180879979" watchObservedRunningTime="2025-01-13 21:53:39.955211973 +0000 UTC m=+1.194734712" Jan 13 21:53:39.955435 kubelet[2655]: I0113 21:53:39.955358 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-2-4850f65211.novalocal" podStartSLOduration=2.955353487 podStartE2EDuration="2.955353487s" podCreationTimestamp="2025-01-13 21:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:53:39.954080634 +0000 UTC m=+1.193603372" watchObservedRunningTime="2025-01-13 21:53:39.955353487 +0000 UTC m=+1.194876225" Jan 13 
21:53:39.979262 kubelet[2655]: I0113 21:53:39.979201 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-2-4850f65211.novalocal" podStartSLOduration=3.979180946 podStartE2EDuration="3.979180946s" podCreationTimestamp="2025-01-13 21:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:53:39.966662794 +0000 UTC m=+1.206185562" watchObservedRunningTime="2025-01-13 21:53:39.979180946 +0000 UTC m=+1.218703684" Jan 13 21:53:42.266542 sudo[1714]: pam_unix(sudo:session): session closed for user root Jan 13 21:53:42.549042 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 13 21:53:42.556353 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:53:42.557099 systemd[1]: sshd@8-172.24.4.15:22-172.24.4.1:56272.service: Deactivated successfully. Jan 13 21:53:42.562345 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:53:42.563304 systemd[1]: session-11.scope: Consumed 7.545s CPU time, 192.7M memory peak, 0B memory swap peak. Jan 13 21:53:42.568792 systemd-logind[1442]: Removed session 11. Jan 13 21:53:51.369046 kubelet[2655]: I0113 21:53:51.368898 2655 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:53:51.371257 containerd[1453]: time="2025-01-13T21:53:51.369905557Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:53:51.372626 kubelet[2655]: I0113 21:53:51.371517 2655 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:53:52.027691 kubelet[2655]: I0113 21:53:52.027613 2655 topology_manager.go:215] "Topology Admit Handler" podUID="9c1d45d2-7b8d-44da-abe7-7e48faf08659" podNamespace="kube-system" podName="kube-proxy-4dgxd" Jan 13 21:53:52.050667 systemd[1]: Created slice kubepods-besteffort-pod9c1d45d2_7b8d_44da_abe7_7e48faf08659.slice - libcontainer container kubepods-besteffort-pod9c1d45d2_7b8d_44da_abe7_7e48faf08659.slice. Jan 13 21:53:52.055841 kubelet[2655]: I0113 21:53:52.052612 2655 topology_manager.go:215] "Topology Admit Handler" podUID="8ae44bab-0bf3-4977-abe2-686505fc1d70" podNamespace="kube-system" podName="cilium-9ppmq" Jan 13 21:53:52.077436 systemd[1]: Created slice kubepods-burstable-pod8ae44bab_0bf3_4977_abe2_686505fc1d70.slice - libcontainer container kubepods-burstable-pod8ae44bab_0bf3_4977_abe2_686505fc1d70.slice. 
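The "Created slice" lines above expose the naming rule the kubelet's systemd cgroup driver uses: a pod lands in a slice named after its QoS class plus its UID with dashes escaped to underscores, so UID 8ae44bab-0bf3-4977-abe2-686505fc1d70 becomes kubepods-burstable-pod8ae44bab_0bf3_4977_abe2_686505fc1d70.slice. A minimal Go sketch of that mapping follows; the helper name and the restriction to the leaf slice are our simplifications.

package main

import (
	"fmt"
	"strings"
)

// sliceForPod reproduces the naming visible in the "Created slice" lines
// above: the pod UID's dashes are turned into underscores and the pod is
// nested under its QoS class slice (burstable/besteffort), while guaranteed
// pods sit directly under kubepods.slice.
func sliceForPod(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	switch qos {
	case "burstable", "besteffort":
		return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
	default:
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
}

func main() {
	fmt.Println(sliceForPod("burstable", "8ae44bab-0bf3-4977-abe2-686505fc1d70"))
	// kubepods-burstable-pod8ae44bab_0bf3_4977_abe2_686505fc1d70.slice
}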
Jan 13 21:53:52.132360 kubelet[2655]: I0113 21:53:52.132309 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-cni-path\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.132360 kubelet[2655]: I0113 21:53:52.132350 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ae44bab-0bf3-4977-abe2-686505fc1d70-hubble-tls\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.132526 kubelet[2655]: I0113 21:53:52.132372 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c1d45d2-7b8d-44da-abe7-7e48faf08659-xtables-lock\") pod \"kube-proxy-4dgxd\" (UID: \"9c1d45d2-7b8d-44da-abe7-7e48faf08659\") " pod="kube-system/kube-proxy-4dgxd" Jan 13 21:53:52.132526 kubelet[2655]: I0113 21:53:52.132394 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-cilium-run\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.132526 kubelet[2655]: I0113 21:53:52.132411 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ae44bab-0bf3-4977-abe2-686505fc1d70-cilium-config-path\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.132526 kubelet[2655]: I0113 21:53:52.132430 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-host-proc-sys-kernel\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.132526 kubelet[2655]: I0113 21:53:52.132452 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-hostproc\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.132526 kubelet[2655]: I0113 21:53:52.132483 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-etc-cni-netd\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.132683 kubelet[2655]: I0113 21:53:52.132501 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-xtables-lock\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.132683 kubelet[2655]: I0113 21:53:52.132517 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/9c1d45d2-7b8d-44da-abe7-7e48faf08659-kube-proxy\") pod \"kube-proxy-4dgxd\" (UID: \"9c1d45d2-7b8d-44da-abe7-7e48faf08659\") " pod="kube-system/kube-proxy-4dgxd" Jan 13 21:53:52.132683 kubelet[2655]: I0113 21:53:52.132534 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c1d45d2-7b8d-44da-abe7-7e48faf08659-lib-modules\") pod \"kube-proxy-4dgxd\" (UID: \"9c1d45d2-7b8d-44da-abe7-7e48faf08659\") " pod="kube-system/kube-proxy-4dgxd" Jan 13 21:53:52.132683 kubelet[2655]: I0113 21:53:52.132550 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-bpf-maps\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.132683 kubelet[2655]: I0113 21:53:52.132569 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tjzx\" (UniqueName: \"kubernetes.io/projected/9c1d45d2-7b8d-44da-abe7-7e48faf08659-kube-api-access-6tjzx\") pod \"kube-proxy-4dgxd\" (UID: \"9c1d45d2-7b8d-44da-abe7-7e48faf08659\") " pod="kube-system/kube-proxy-4dgxd" Jan 13 21:53:52.132683 kubelet[2655]: I0113 21:53:52.132589 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-cilium-cgroup\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.132829 kubelet[2655]: I0113 21:53:52.132609 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-lib-modules\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.132829 kubelet[2655]: I0113 21:53:52.132625 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ae44bab-0bf3-4977-abe2-686505fc1d70-clustermesh-secrets\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.132829 kubelet[2655]: I0113 21:53:52.132642 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-host-proc-sys-net\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.132829 kubelet[2655]: I0113 21:53:52.132661 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h65x2\" (UniqueName: \"kubernetes.io/projected/8ae44bab-0bf3-4977-abe2-686505fc1d70-kube-api-access-h65x2\") pod \"cilium-9ppmq\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") " pod="kube-system/cilium-9ppmq" Jan 13 21:53:52.292458 kubelet[2655]: I0113 21:53:52.292355 2655 topology_manager.go:215] "Topology Admit Handler" podUID="fcd799eb-a4c0-44ef-a120-5fb2f0404b3e" podNamespace="kube-system" podName="cilium-operator-599987898-rpzjn" Jan 13 21:53:52.313181 systemd[1]: Created slice 
kubepods-besteffort-podfcd799eb_a4c0_44ef_a120_5fb2f0404b3e.slice - libcontainer container kubepods-besteffort-podfcd799eb_a4c0_44ef_a120_5fb2f0404b3e.slice. Jan 13 21:53:52.334171 kubelet[2655]: I0113 21:53:52.334090 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fcd799eb-a4c0-44ef-a120-5fb2f0404b3e-cilium-config-path\") pod \"cilium-operator-599987898-rpzjn\" (UID: \"fcd799eb-a4c0-44ef-a120-5fb2f0404b3e\") " pod="kube-system/cilium-operator-599987898-rpzjn" Jan 13 21:53:52.334171 kubelet[2655]: I0113 21:53:52.334149 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jcph\" (UniqueName: \"kubernetes.io/projected/fcd799eb-a4c0-44ef-a120-5fb2f0404b3e-kube-api-access-9jcph\") pod \"cilium-operator-599987898-rpzjn\" (UID: \"fcd799eb-a4c0-44ef-a120-5fb2f0404b3e\") " pod="kube-system/cilium-operator-599987898-rpzjn" Jan 13 21:53:52.363789 containerd[1453]: time="2025-01-13T21:53:52.363700499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4dgxd,Uid:9c1d45d2-7b8d-44da-abe7-7e48faf08659,Namespace:kube-system,Attempt:0,}" Jan 13 21:53:52.388877 containerd[1453]: time="2025-01-13T21:53:52.385732319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ppmq,Uid:8ae44bab-0bf3-4977-abe2-686505fc1d70,Namespace:kube-system,Attempt:0,}" Jan 13 21:53:52.423707 containerd[1453]: time="2025-01-13T21:53:52.422602978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:53:52.423707 containerd[1453]: time="2025-01-13T21:53:52.422724540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:53:52.423707 containerd[1453]: time="2025-01-13T21:53:52.422770279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:53:52.427275 containerd[1453]: time="2025-01-13T21:53:52.427116098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:53:52.468005 containerd[1453]: time="2025-01-13T21:53:52.467835843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:53:52.468207 systemd[1]: Started cri-containerd-652112bc21818ac5a28a9b7a83a8d437dede12a79c4b488ffd3b6ea4f0a78866.scope - libcontainer container 652112bc21818ac5a28a9b7a83a8d437dede12a79c4b488ffd3b6ea4f0a78866. Jan 13 21:53:52.468978 containerd[1453]: time="2025-01-13T21:53:52.468738649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:53:52.469842 containerd[1453]: time="2025-01-13T21:53:52.469116747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:53:52.471522 containerd[1453]: time="2025-01-13T21:53:52.471437306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:53:52.498140 systemd[1]: Started cri-containerd-3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9.scope - libcontainer container 3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9. Jan 13 21:53:52.507494 containerd[1453]: time="2025-01-13T21:53:52.507429075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4dgxd,Uid:9c1d45d2-7b8d-44da-abe7-7e48faf08659,Namespace:kube-system,Attempt:0,} returns sandbox id \"652112bc21818ac5a28a9b7a83a8d437dede12a79c4b488ffd3b6ea4f0a78866\"" Jan 13 21:53:52.511697 containerd[1453]: time="2025-01-13T21:53:52.511661009Z" level=info msg="CreateContainer within sandbox \"652112bc21818ac5a28a9b7a83a8d437dede12a79c4b488ffd3b6ea4f0a78866\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:53:52.534713 containerd[1453]: time="2025-01-13T21:53:52.534343441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ppmq,Uid:8ae44bab-0bf3-4977-abe2-686505fc1d70,Namespace:kube-system,Attempt:0,} returns sandbox id \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\"" Jan 13 21:53:52.536509 containerd[1453]: time="2025-01-13T21:53:52.536478484Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:53:52.548997 containerd[1453]: time="2025-01-13T21:53:52.548768978Z" level=info msg="CreateContainer within sandbox \"652112bc21818ac5a28a9b7a83a8d437dede12a79c4b488ffd3b6ea4f0a78866\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4e034f431f227b95d2a04fe1de7b5c120a2448dae89b977172b0d5c7b09b1e84\"" Jan 13 21:53:52.549432 containerd[1453]: time="2025-01-13T21:53:52.549402599Z" level=info msg="StartContainer for \"4e034f431f227b95d2a04fe1de7b5c120a2448dae89b977172b0d5c7b09b1e84\"" Jan 13 21:53:52.584093 systemd[1]: Started cri-containerd-4e034f431f227b95d2a04fe1de7b5c120a2448dae89b977172b0d5c7b09b1e84.scope - libcontainer container 4e034f431f227b95d2a04fe1de7b5c120a2448dae89b977172b0d5c7b09b1e84. Jan 13 21:53:52.612438 containerd[1453]: time="2025-01-13T21:53:52.612386279Z" level=info msg="StartContainer for \"4e034f431f227b95d2a04fe1de7b5c120a2448dae89b977172b0d5c7b09b1e84\" returns successfully" Jan 13 21:53:52.626819 containerd[1453]: time="2025-01-13T21:53:52.626771780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rpzjn,Uid:fcd799eb-a4c0-44ef-a120-5fb2f0404b3e,Namespace:kube-system,Attempt:0,}" Jan 13 21:53:52.661435 containerd[1453]: time="2025-01-13T21:53:52.661304388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:53:52.661435 containerd[1453]: time="2025-01-13T21:53:52.661384300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:53:52.661821 containerd[1453]: time="2025-01-13T21:53:52.661406704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:53:52.663067 containerd[1453]: time="2025-01-13T21:53:52.661921811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:53:52.688474 systemd[1]: Started cri-containerd-67a00b7aac3a074b9ec83976cc9d497ab073b5c69b4414d97d9a1abbf691b6c1.scope - libcontainer container 67a00b7aac3a074b9ec83976cc9d497ab073b5c69b4414d97d9a1abbf691b6c1. Jan 13 21:53:52.740169 containerd[1453]: time="2025-01-13T21:53:52.740003693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rpzjn,Uid:fcd799eb-a4c0-44ef-a120-5fb2f0404b3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"67a00b7aac3a074b9ec83976cc9d497ab073b5c69b4414d97d9a1abbf691b6c1\"" Jan 13 21:53:57.414987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394557022.mount: Deactivated successfully. Jan 13 21:54:00.536864 containerd[1453]: time="2025-01-13T21:54:00.536704059Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:54:00.542302 containerd[1453]: time="2025-01-13T21:54:00.541768555Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734731" Jan 13 21:54:00.555051 containerd[1453]: time="2025-01-13T21:54:00.554912283Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:54:00.566697 containerd[1453]: time="2025-01-13T21:54:00.566423868Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.029898356s" Jan 13 21:54:00.566697 containerd[1453]: time="2025-01-13T21:54:00.566519362Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 21:54:00.570392 containerd[1453]: time="2025-01-13T21:54:00.568902166Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:54:00.572821 containerd[1453]: time="2025-01-13T21:54:00.572701639Z" level=info msg="CreateContainer within sandbox \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:54:00.612297 containerd[1453]: time="2025-01-13T21:54:00.612219469Z" level=info msg="CreateContainer within sandbox \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07\"" Jan 13 21:54:00.614517 containerd[1453]: time="2025-01-13T21:54:00.613923973Z" level=info msg="StartContainer for \"696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07\"" Jan 13 21:54:00.660235 systemd[1]: Started cri-containerd-696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07.scope - libcontainer container 696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07. 
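None of the sandboxes above could be created until the kubelet's volume reconciler had verified every volume in the long run of reconciler_common.go entries at the top of this section (host paths, configmaps, projected service-account tokens, secrets). A minimal sketch, assuming one journal record per line on stdin, that groups those volume names by pod:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Captures the escaped volume name and the trailing pod="ns/name" field
	// from kubelet's reconciler_common.go entries.
	re := regexp.MustCompile(`VerifyControllerAttachedVolume started for volume \\?"([^"\\]+)\\?".*?pod="([^"]+)"`)

	byPod := map[string][]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			byPod[m[2]] = append(byPod[m[2]], m[1])
		}
	}
	for pod, vols := range byPod {
		fmt.Printf("%-35s %2d volumes: %v\n", pod, len(vols), vols)
	}
}
```

Run against this log it would report roughly a dozen volumes for kube-system/cilium-9ppmq and four for kube-system/kube-proxy-4dgxd; the exact regex is an assumption about the capture format, not kubelet API.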
Jan 13 21:54:00.693318 containerd[1453]: time="2025-01-13T21:54:00.693265947Z" level=info msg="StartContainer for \"696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07\" returns successfully" Jan 13 21:54:00.700553 systemd[1]: cri-containerd-696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07.scope: Deactivated successfully. Jan 13 21:54:01.047047 kubelet[2655]: I0113 21:54:01.046875 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4dgxd" podStartSLOduration=9.046840873 podStartE2EDuration="9.046840873s" podCreationTimestamp="2025-01-13 21:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:53:52.968482764 +0000 UTC m=+14.208005502" watchObservedRunningTime="2025-01-13 21:54:01.046840873 +0000 UTC m=+22.286363661" Jan 13 21:54:01.209051 containerd[1453]: time="2025-01-13T21:54:01.208806506Z" level=info msg="shim disconnected" id=696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07 namespace=k8s.io Jan 13 21:54:01.209051 containerd[1453]: time="2025-01-13T21:54:01.208922865Z" level=warning msg="cleaning up after shim disconnected" id=696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07 namespace=k8s.io Jan 13 21:54:01.209051 containerd[1453]: time="2025-01-13T21:54:01.208946998Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:54:01.596883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07-rootfs.mount: Deactivated successfully. Jan 13 21:54:02.011283 containerd[1453]: time="2025-01-13T21:54:02.009478161Z" level=info msg="CreateContainer within sandbox \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:54:02.046100 containerd[1453]: time="2025-01-13T21:54:02.046044886Z" level=info msg="CreateContainer within sandbox \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5\"" Jan 13 21:54:02.047876 containerd[1453]: time="2025-01-13T21:54:02.046974079Z" level=info msg="StartContainer for \"398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5\"" Jan 13 21:54:02.084133 systemd[1]: Started cri-containerd-398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5.scope - libcontainer container 398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5. Jan 13 21:54:02.115837 containerd[1453]: time="2025-01-13T21:54:02.115726772Z" level=info msg="StartContainer for \"398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5\" returns successfully" Jan 13 21:54:02.125654 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:54:02.125996 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:54:02.126085 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:54:02.133425 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:54:02.133669 systemd[1]: cri-containerd-398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5.scope: Deactivated successfully. Jan 13 21:54:02.150567 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 21:54:02.164752 containerd[1453]: time="2025-01-13T21:54:02.164527556Z" level=info msg="shim disconnected" id=398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5 namespace=k8s.io Jan 13 21:54:02.164752 containerd[1453]: time="2025-01-13T21:54:02.164589462Z" level=warning msg="cleaning up after shim disconnected" id=398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5 namespace=k8s.io Jan 13 21:54:02.164752 containerd[1453]: time="2025-01-13T21:54:02.164602782Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:54:02.595948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5-rootfs.mount: Deactivated successfully. Jan 13 21:54:03.013204 containerd[1453]: time="2025-01-13T21:54:03.012500939Z" level=info msg="CreateContainer within sandbox \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:54:03.078674 containerd[1453]: time="2025-01-13T21:54:03.075330420Z" level=info msg="CreateContainer within sandbox \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1\"" Jan 13 21:54:03.083256 containerd[1453]: time="2025-01-13T21:54:03.080431714Z" level=info msg="StartContainer for \"91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1\"" Jan 13 21:54:03.132128 systemd[1]: Started cri-containerd-91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1.scope - libcontainer container 91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1. Jan 13 21:54:03.162796 systemd[1]: cri-containerd-91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1.scope: Deactivated successfully. Jan 13 21:54:03.164383 containerd[1453]: time="2025-01-13T21:54:03.164207486Z" level=info msg="StartContainer for \"91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1\" returns successfully" Jan 13 21:54:03.196927 containerd[1453]: time="2025-01-13T21:54:03.196851316Z" level=info msg="shim disconnected" id=91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1 namespace=k8s.io Jan 13 21:54:03.197114 containerd[1453]: time="2025-01-13T21:54:03.196950283Z" level=warning msg="cleaning up after shim disconnected" id=91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1 namespace=k8s.io Jan 13 21:54:03.197114 containerd[1453]: time="2025-01-13T21:54:03.197013371Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:54:03.595604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1-rootfs.mount: Deactivated successfully. 
Jan 13 21:54:04.030398 containerd[1453]: time="2025-01-13T21:54:04.030204844Z" level=info msg="CreateContainer within sandbox \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:54:04.081923 containerd[1453]: time="2025-01-13T21:54:04.081784665Z" level=info msg="CreateContainer within sandbox \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8\"" Jan 13 21:54:04.087425 containerd[1453]: time="2025-01-13T21:54:04.085408718Z" level=info msg="StartContainer for \"028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8\"" Jan 13 21:54:04.147297 systemd[1]: Started cri-containerd-028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8.scope - libcontainer container 028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8. Jan 13 21:54:04.184313 systemd[1]: cri-containerd-028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8.scope: Deactivated successfully. Jan 13 21:54:04.191814 containerd[1453]: time="2025-01-13T21:54:04.191773641Z" level=info msg="StartContainer for \"028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8\" returns successfully" Jan 13 21:54:04.249767 containerd[1453]: time="2025-01-13T21:54:04.249664213Z" level=info msg="shim disconnected" id=028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8 namespace=k8s.io Jan 13 21:54:04.249767 containerd[1453]: time="2025-01-13T21:54:04.249749239Z" level=warning msg="cleaning up after shim disconnected" id=028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8 namespace=k8s.io Jan 13 21:54:04.249767 containerd[1453]: time="2025-01-13T21:54:04.249760274Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:54:04.597242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8-rootfs.mount: Deactivated successfully. Jan 13 21:54:05.033406 containerd[1453]: time="2025-01-13T21:54:05.033199984Z" level=info msg="CreateContainer within sandbox \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:54:05.051061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4134221254.mount: Deactivated successfully. Jan 13 21:54:05.062473 containerd[1453]: time="2025-01-13T21:54:05.062425200Z" level=info msg="CreateContainer within sandbox \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\"" Jan 13 21:54:05.063295 containerd[1453]: time="2025-01-13T21:54:05.063265676Z" level=info msg="StartContainer for \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\"" Jan 13 21:54:05.119127 systemd[1]: Started cri-containerd-de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917.scope - libcontainer container de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917. 
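The four blocks above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) are cilium's init containers, and each traces the same five-step cycle: CreateContainer returns an id, StartContainer runs it, the systemd scope deactivates when it exits, the shim disconnects, and the rootfs mount is cleaned up. A rough sketch, under the same one-record-per-line assumption, that lists successful starts in order so the init sequence can be read off directly:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches containerd's start-confirmation records; ids are 64 hex chars.
	re := regexp.MustCompile(`time="([^"]+)" level=info msg="StartContainer for \\"([0-9a-f]{64})\\" returns successfully"`)

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			// Journald keeps records in order, so printing while scanning
			// yields the sequence seen above: 696a3af6..., 398116f7...,
			// 91b128bf..., 028ed425..., then the long-lived cilium-agent.
			fmt.Printf("%s  %s\n", m[1], m[2][:12])
		}
	}
}
```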
Jan 13 21:54:05.168597 containerd[1453]: time="2025-01-13T21:54:05.168551085Z" level=info msg="StartContainer for \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\" returns successfully" Jan 13 21:54:05.249749 containerd[1453]: time="2025-01-13T21:54:05.249135777Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:54:05.252420 containerd[1453]: time="2025-01-13T21:54:05.252385096Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907181" Jan 13 21:54:05.254674 containerd[1453]: time="2025-01-13T21:54:05.254402538Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:54:05.260667 containerd[1453]: time="2025-01-13T21:54:05.260613631Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.691585134s" Jan 13 21:54:05.260886 containerd[1453]: time="2025-01-13T21:54:05.260864907Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 21:54:05.263652 containerd[1453]: time="2025-01-13T21:54:05.263626175Z" level=info msg="CreateContainer within sandbox \"67a00b7aac3a074b9ec83976cc9d497ab073b5c69b4414d97d9a1abbf691b6c1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 21:54:05.287212 containerd[1453]: time="2025-01-13T21:54:05.286880069Z" level=info msg="CreateContainer within sandbox \"67a00b7aac3a074b9ec83976cc9d497ab073b5c69b4414d97d9a1abbf691b6c1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\"" Jan 13 21:54:05.288905 containerd[1453]: time="2025-01-13T21:54:05.288488392Z" level=info msg="StartContainer for \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\"" Jan 13 21:54:05.320332 systemd[1]: Started cri-containerd-a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f.scope - libcontainer container a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f. 
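The "in 4.691585134s" above (and the earlier "in 8.029898356s" for the agent image) is containerd's own clock across the pull; nearly the same figure can be recovered from the PullImage and Pulled record timestamps. A quick check with the operator-generic values copied from the entries above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the PullImage / Pulled records for
	// quay.io/cilium/operator-generic in this log.
	start, _ := time.Parse(time.RFC3339Nano, "2025-01-13T21:54:00.568902166Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-01-13T21:54:05.260613631Z")
	fmt.Println(done.Sub(start)) // ~4.6917s, vs the reported 4.691585134s
}
```

The small gap between the two figures is expected: the log timestamp is taken when the record is emitted, not when the pull completes internally.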
Jan 13 21:54:05.328173 kubelet[2655]: I0113 21:54:05.325577 2655 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:54:05.363799 kubelet[2655]: I0113 21:54:05.363744 2655 topology_manager.go:215] "Topology Admit Handler" podUID="86889f44-c02f-4cba-b2d2-981e9f9daf11" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rd6fd" Jan 13 21:54:05.368279 kubelet[2655]: I0113 21:54:05.367026 2655 topology_manager.go:215] "Topology Admit Handler" podUID="a5c322fd-2b89-4b7b-8035-4b9acc3d3d4d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fvvlx" Jan 13 21:54:05.373669 systemd[1]: Created slice kubepods-burstable-pod86889f44_c02f_4cba_b2d2_981e9f9daf11.slice - libcontainer container kubepods-burstable-pod86889f44_c02f_4cba_b2d2_981e9f9daf11.slice. Jan 13 21:54:05.382880 systemd[1]: Created slice kubepods-burstable-poda5c322fd_2b89_4b7b_8035_4b9acc3d3d4d.slice - libcontainer container kubepods-burstable-poda5c322fd_2b89_4b7b_8035_4b9acc3d3d4d.slice. Jan 13 21:54:05.423103 containerd[1453]: time="2025-01-13T21:54:05.422707775Z" level=info msg="StartContainer for \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\" returns successfully" Jan 13 21:54:05.434074 kubelet[2655]: I0113 21:54:05.434012 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86889f44-c02f-4cba-b2d2-981e9f9daf11-config-volume\") pod \"coredns-7db6d8ff4d-rd6fd\" (UID: \"86889f44-c02f-4cba-b2d2-981e9f9daf11\") " pod="kube-system/coredns-7db6d8ff4d-rd6fd" Jan 13 21:54:05.434074 kubelet[2655]: I0113 21:54:05.434064 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5c322fd-2b89-4b7b-8035-4b9acc3d3d4d-config-volume\") pod \"coredns-7db6d8ff4d-fvvlx\" (UID: \"a5c322fd-2b89-4b7b-8035-4b9acc3d3d4d\") " pod="kube-system/coredns-7db6d8ff4d-fvvlx" Jan 13 21:54:05.434245 kubelet[2655]: I0113 21:54:05.434091 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j54fn\" (UniqueName: \"kubernetes.io/projected/a5c322fd-2b89-4b7b-8035-4b9acc3d3d4d-kube-api-access-j54fn\") pod \"coredns-7db6d8ff4d-fvvlx\" (UID: \"a5c322fd-2b89-4b7b-8035-4b9acc3d3d4d\") " pod="kube-system/coredns-7db6d8ff4d-fvvlx" Jan 13 21:54:05.434245 kubelet[2655]: I0113 21:54:05.434112 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd2vp\" (UniqueName: \"kubernetes.io/projected/86889f44-c02f-4cba-b2d2-981e9f9daf11-kube-api-access-cd2vp\") pod \"coredns-7db6d8ff4d-rd6fd\" (UID: \"86889f44-c02f-4cba-b2d2-981e9f9daf11\") " pod="kube-system/coredns-7db6d8ff4d-rd6fd" Jan 13 21:54:05.679521 containerd[1453]: time="2025-01-13T21:54:05.678851251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rd6fd,Uid:86889f44-c02f-4cba-b2d2-981e9f9daf11,Namespace:kube-system,Attempt:0,}" Jan 13 21:54:05.693473 containerd[1453]: time="2025-01-13T21:54:05.693415438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fvvlx,Uid:a5c322fd-2b89-4b7b-8035-4b9acc3d3d4d,Namespace:kube-system,Attempt:0,}" Jan 13 21:54:06.176402 kubelet[2655]: I0113 21:54:06.176119 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-rpzjn" podStartSLOduration=1.655871886 podStartE2EDuration="14.176101574s" 
podCreationTimestamp="2025-01-13 21:53:52 +0000 UTC" firstStartedPulling="2025-01-13 21:53:52.741692736 +0000 UTC m=+13.981215474" lastFinishedPulling="2025-01-13 21:54:05.261922414 +0000 UTC m=+26.501445162" observedRunningTime="2025-01-13 21:54:06.056377535 +0000 UTC m=+27.295900283" watchObservedRunningTime="2025-01-13 21:54:06.176101574 +0000 UTC m=+27.415624322" Jan 13 21:54:07.692709 systemd-networkd[1366]: cilium_host: Link UP Jan 13 21:54:07.694335 systemd-networkd[1366]: cilium_net: Link UP Jan 13 21:54:07.694579 systemd-networkd[1366]: cilium_net: Gained carrier Jan 13 21:54:07.694814 systemd-networkd[1366]: cilium_host: Gained carrier Jan 13 21:54:07.806724 systemd-networkd[1366]: cilium_vxlan: Link UP Jan 13 21:54:07.806733 systemd-networkd[1366]: cilium_vxlan: Gained carrier Jan 13 21:54:07.896172 systemd-networkd[1366]: cilium_net: Gained IPv6LL Jan 13 21:54:08.134121 kernel: NET: Registered PF_ALG protocol family Jan 13 21:54:08.191250 systemd-networkd[1366]: cilium_host: Gained IPv6LL Jan 13 21:54:08.890851 systemd-networkd[1366]: lxc_health: Link UP Jan 13 21:54:08.903651 systemd-networkd[1366]: lxc_health: Gained carrier Jan 13 21:54:09.008134 systemd-networkd[1366]: cilium_vxlan: Gained IPv6LL Jan 13 21:54:09.251353 systemd-networkd[1366]: lxc83fb1d9d150d: Link UP Jan 13 21:54:09.255051 kernel: eth0: renamed from tmpc585e Jan 13 21:54:09.260843 systemd-networkd[1366]: lxc83fb1d9d150d: Gained carrier Jan 13 21:54:09.292142 systemd-networkd[1366]: lxcf27a779addd3: Link UP Jan 13 21:54:09.299067 kernel: eth0: renamed from tmpff83d Jan 13 21:54:09.309086 systemd-networkd[1366]: lxcf27a779addd3: Gained carrier Jan 13 21:54:09.967205 systemd-networkd[1366]: lxc_health: Gained IPv6LL Jan 13 21:54:10.465267 kubelet[2655]: I0113 21:54:10.464857 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9ppmq" podStartSLOduration=10.432369103 podStartE2EDuration="18.464835543s" podCreationTimestamp="2025-01-13 21:53:52 +0000 UTC" firstStartedPulling="2025-01-13 21:53:52.536094252 +0000 UTC m=+13.775616990" lastFinishedPulling="2025-01-13 21:54:00.568560641 +0000 UTC m=+21.808083430" observedRunningTime="2025-01-13 21:54:06.189269797 +0000 UTC m=+27.428792565" watchObservedRunningTime="2025-01-13 21:54:10.464835543 +0000 UTC m=+31.704358281" Jan 13 21:54:10.479224 systemd-networkd[1366]: lxc83fb1d9d150d: Gained IPv6LL Jan 13 21:54:10.991181 systemd-networkd[1366]: lxcf27a779addd3: Gained IPv6LL Jan 13 21:54:13.766410 containerd[1453]: time="2025-01-13T21:54:13.766145946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:54:13.766410 containerd[1453]: time="2025-01-13T21:54:13.766202565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:54:13.766410 containerd[1453]: time="2025-01-13T21:54:13.766217917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:54:13.766410 containerd[1453]: time="2025-01-13T21:54:13.766309740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:54:13.801249 systemd[1]: Started cri-containerd-c585e2e7a982941b67e7053946eeab98ba339cbb2a5bc8a87536a22c873dee88.scope - libcontainer container c585e2e7a982941b67e7053946eeab98ba339cbb2a5bc8a87536a22c873dee88. 
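The pod_startup_latency_tracker entry for cilium-9ppmq above carries two figures: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling) excluded, which is why kube-proxy's two figures earlier were identical (its pull timestamps are zero-valued). Reproducing the arithmetic from the logged timestamps, as an illustration rather than a statement about kubelet internals:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func must(t time.Time, err error) time.Time {
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// All four timestamps copied from the cilium-9ppmq tracker entry above.
	created := must(time.Parse(layout, "2025-01-13 21:53:52 +0000 UTC"))
	running := must(time.Parse(layout, "2025-01-13 21:54:10.464835543 +0000 UTC"))
	pullStart := must(time.Parse(layout, "2025-01-13 21:53:52.536094252 +0000 UTC"))
	pullEnd := must(time.Parse(layout, "2025-01-13 21:54:00.568560641 +0000 UTC"))

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	// Prints 18.464835543s and ~10.432369s; the tracker's own snapshot
	// (10.432369103) differs by a few tens of nanoseconds.
	fmt.Println(e2e, slo)
}
```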
Jan 13 21:54:13.825893 containerd[1453]: time="2025-01-13T21:54:13.824493340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:54:13.825893 containerd[1453]: time="2025-01-13T21:54:13.825551785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:54:13.825893 containerd[1453]: time="2025-01-13T21:54:13.825566125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:54:13.825893 containerd[1453]: time="2025-01-13T21:54:13.825737846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:54:13.865164 systemd[1]: Started cri-containerd-ff83d88c105b4161007e064a9670f96e88e3037340e7ce3480f6996516b4db39.scope - libcontainer container ff83d88c105b4161007e064a9670f96e88e3037340e7ce3480f6996516b4db39. Jan 13 21:54:13.885089 containerd[1453]: time="2025-01-13T21:54:13.885021518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rd6fd,Uid:86889f44-c02f-4cba-b2d2-981e9f9daf11,Namespace:kube-system,Attempt:0,} returns sandbox id \"c585e2e7a982941b67e7053946eeab98ba339cbb2a5bc8a87536a22c873dee88\"" Jan 13 21:54:13.890658 containerd[1453]: time="2025-01-13T21:54:13.890625280Z" level=info msg="CreateContainer within sandbox \"c585e2e7a982941b67e7053946eeab98ba339cbb2a5bc8a87536a22c873dee88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:54:13.922177 containerd[1453]: time="2025-01-13T21:54:13.921951132Z" level=info msg="CreateContainer within sandbox \"c585e2e7a982941b67e7053946eeab98ba339cbb2a5bc8a87536a22c873dee88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb0759daf2e0d1fe6c8d2d7060dc69130d3ceb493b2b626cc75c6d2595f1027f\"" Jan 13 21:54:13.923649 containerd[1453]: time="2025-01-13T21:54:13.923180707Z" level=info msg="StartContainer for \"bb0759daf2e0d1fe6c8d2d7060dc69130d3ceb493b2b626cc75c6d2595f1027f\"" Jan 13 21:54:13.945929 containerd[1453]: time="2025-01-13T21:54:13.945859558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fvvlx,Uid:a5c322fd-2b89-4b7b-8035-4b9acc3d3d4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff83d88c105b4161007e064a9670f96e88e3037340e7ce3480f6996516b4db39\"" Jan 13 21:54:13.956251 containerd[1453]: time="2025-01-13T21:54:13.956207524Z" level=info msg="CreateContainer within sandbox \"ff83d88c105b4161007e064a9670f96e88e3037340e7ce3480f6996516b4db39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:54:13.980258 systemd[1]: Started cri-containerd-bb0759daf2e0d1fe6c8d2d7060dc69130d3ceb493b2b626cc75c6d2595f1027f.scope - libcontainer container bb0759daf2e0d1fe6c8d2d7060dc69130d3ceb493b2b626cc75c6d2595f1027f. 
Jan 13 21:54:13.988000 containerd[1453]: time="2025-01-13T21:54:13.987250481Z" level=info msg="CreateContainer within sandbox \"ff83d88c105b4161007e064a9670f96e88e3037340e7ce3480f6996516b4db39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a898df86089a8d50e6d573d7d1bac4c10b098a03a31fab87a977049472b0840e\"" Jan 13 21:54:13.988271 containerd[1453]: time="2025-01-13T21:54:13.988241055Z" level=info msg="StartContainer for \"a898df86089a8d50e6d573d7d1bac4c10b098a03a31fab87a977049472b0840e\"" Jan 13 21:54:14.041912 systemd[1]: Started cri-containerd-a898df86089a8d50e6d573d7d1bac4c10b098a03a31fab87a977049472b0840e.scope - libcontainer container a898df86089a8d50e6d573d7d1bac4c10b098a03a31fab87a977049472b0840e. Jan 13 21:54:14.051134 containerd[1453]: time="2025-01-13T21:54:14.050884036Z" level=info msg="StartContainer for \"bb0759daf2e0d1fe6c8d2d7060dc69130d3ceb493b2b626cc75c6d2595f1027f\" returns successfully" Jan 13 21:54:14.093188 kubelet[2655]: I0113 21:54:14.093057 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rd6fd" podStartSLOduration=22.092736961 podStartE2EDuration="22.092736961s" podCreationTimestamp="2025-01-13 21:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:54:14.088401283 +0000 UTC m=+35.327924051" watchObservedRunningTime="2025-01-13 21:54:14.092736961 +0000 UTC m=+35.332259699" Jan 13 21:54:14.096573 containerd[1453]: time="2025-01-13T21:54:14.096526213Z" level=info msg="StartContainer for \"a898df86089a8d50e6d573d7d1bac4c10b098a03a31fab87a977049472b0840e\" returns successfully" Jan 13 21:54:15.141950 kubelet[2655]: I0113 21:54:15.140842 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fvvlx" podStartSLOduration=23.140805506 podStartE2EDuration="23.140805506s" podCreationTimestamp="2025-01-13 21:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:54:15.107866685 +0000 UTC m=+36.347389473" watchObservedRunningTime="2025-01-13 21:54:15.140805506 +0000 UTC m=+36.380328294" Jan 13 21:54:36.561694 update_engine[1443]: I20250113 21:54:36.561579 1443 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 13 21:54:36.561694 update_engine[1443]: I20250113 21:54:36.561671 1443 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 13 21:54:36.562768 update_engine[1443]: I20250113 21:54:36.562089 1443 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 13 21:54:36.563035 update_engine[1443]: I20250113 21:54:36.562947 1443 omaha_request_params.cc:62] Current group set to lts Jan 13 21:54:36.563271 update_engine[1443]: I20250113 21:54:36.563207 1443 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 13 21:54:36.563271 update_engine[1443]: I20250113 21:54:36.563246 1443 update_attempter.cc:643] Scheduling an action processor start. 
Jan 13 21:54:36.563409 update_engine[1443]: I20250113 21:54:36.563278 1443 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 21:54:36.563409 update_engine[1443]: I20250113 21:54:36.563336 1443 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 13 21:54:36.563585 update_engine[1443]: I20250113 21:54:36.563455 1443 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 21:54:36.563585 update_engine[1443]: I20250113 21:54:36.563477 1443 omaha_request_action.cc:272] Request: Jan 13 21:54:36.563585 update_engine[1443]: [multi-line Omaha request XML not preserved in this capture] Jan 13 21:54:36.563585 update_engine[1443]: I20250113 21:54:36.563491 1443 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 21:54:36.565544 locksmithd[1463]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 13 21:54:36.566472 update_engine[1443]: I20250113 21:54:36.566411 1443 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 21:54:36.567125 update_engine[1443]: I20250113 21:54:36.567025 1443 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 21:54:36.579654 update_engine[1443]: E20250113 21:54:36.579567 1443 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 21:54:36.579825 update_engine[1443]: I20250113 21:54:36.579710 1443 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 13 21:54:46.553658 update_engine[1443]: I20250113 21:54:46.553512 1443 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 21:54:46.554477 update_engine[1443]: I20250113 21:54:46.554056 1443 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 21:54:46.554864 update_engine[1443]: I20250113 21:54:46.554760 1443 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 21:54:46.565411 update_engine[1443]: E20250113 21:54:46.565312 1443 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 21:54:46.565576 update_engine[1443]: I20250113 21:54:46.565425 1443 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 13 21:54:53.400665 systemd[1]: Started sshd@9-172.24.4.15:22-172.24.4.1:53044.service - OpenSSH per-connection server daemon (172.24.4.1:53044). Jan 13 21:54:54.728463 sshd[4042]: Accepted publickey for core from 172.24.4.1 port 53044 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:54:54.731513 sshd[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:54:54.744765 systemd-logind[1442]: New session 12 of user core. Jan 13 21:54:54.756348 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:54:55.469933 sshd[4042]: pam_unix(sshd:session): session closed for user core Jan 13 21:54:55.476085 systemd[1]: sshd@9-172.24.4.15:22-172.24.4.1:53044.service: Deactivated successfully. Jan 13 21:54:55.478631 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:54:55.479744 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:54:55.481449 systemd-logind[1442]: Removed session 12.
Jan 13 21:54:56.545261 update_engine[1443]: I20250113 21:54:56.544142 1443 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 21:54:56.545261 update_engine[1443]: I20250113 21:54:56.544633 1443 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 21:54:56.545261 update_engine[1443]: I20250113 21:54:56.545145 1443 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 21:54:56.555480 update_engine[1443]: E20250113 21:54:56.555317 1443 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 21:54:56.555480 update_engine[1443]: I20250113 21:54:56.555428 1443 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 13 21:55:00.493369 systemd[1]: Started sshd@10-172.24.4.15:22-172.24.4.1:45074.service - OpenSSH per-connection server daemon (172.24.4.1:45074). Jan 13 21:55:01.692882 sshd[4056]: Accepted publickey for core from 172.24.4.1 port 45074 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:55:01.694345 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:55:01.701094 systemd-logind[1442]: New session 13 of user core. Jan 13 21:55:01.705231 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:55:02.479370 sshd[4056]: pam_unix(sshd:session): session closed for user core Jan 13 21:55:02.487001 systemd[1]: sshd@10-172.24.4.15:22-172.24.4.1:45074.service: Deactivated successfully. Jan 13 21:55:02.492354 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:55:02.494635 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:55:02.498896 systemd-logind[1442]: Removed session 13. Jan 13 21:55:06.545499 update_engine[1443]: I20250113 21:55:06.544435 1443 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 21:55:06.546590 update_engine[1443]: I20250113 21:55:06.545665 1443 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 21:55:06.546590 update_engine[1443]: I20250113 21:55:06.546140 1443 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 21:55:06.570317 update_engine[1443]: E20250113 21:55:06.570225 1443 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 21:55:06.609391 update_engine[1443]: I20250113 21:55:06.570341 1443 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 21:55:06.609391 update_engine[1443]: I20250113 21:55:06.570363 1443 omaha_request_action.cc:617] Omaha request response: Jan 13 21:55:06.609581 update_engine[1443]: E20250113 21:55:06.609418 1443 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 13 21:55:06.609581 update_engine[1443]: I20250113 21:55:06.609493 1443 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 13 21:55:06.609581 update_engine[1443]: I20250113 21:55:06.609508 1443 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 21:55:06.609581 update_engine[1443]: I20250113 21:55:06.609524 1443 update_attempter.cc:306] Processing Done. Jan 13 21:55:06.609581 update_engine[1443]: E20250113 21:55:06.609549 1443 update_attempter.cc:619] Update failed. 
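Viewed end to end, the update_engine activity from 21:54:36 onward is one fixed-cadence retry loop: the Omaha host is configured as the literal string "disabled", so every libcurl transfer fails name resolution, and a new attempt starts roughly ten seconds later until three retries are exhausted and the failure is reported. A toy sketch of that shape (the interval and retry cap are read off the timestamps above, not taken from update_engine's source):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// fetch stands in for the libcurl transfer; with the server set to the
// literal host "disabled" it can only ever fail, which is intentional.
func fetch(url string) error {
	return errors.New("could not resolve host: disabled")
}

func main() {
	const maxRetries = 3
	for attempt := 1; ; attempt++ {
		err := fetch("https://disabled/update")
		if err == nil {
			fmt.Println("transfer complete")
			break
		}
		if attempt > maxRetries {
			fmt.Println("transfer failed; reporting error event and rescheduling")
			break
		}
		fmt.Printf("no HTTP response, retry %d\n", attempt)
		time.Sleep(10 * time.Second) // spacing observed between attempts in the log
	}
}
```

Four attempts with three "retry N" messages, then a reported failure, matches the sequence logged between 21:54:36 and 21:55:06.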
Jan 13 21:55:06.609581 update_engine[1443]: I20250113 21:55:06.609561 1443 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 13 21:55:06.609581 update_engine[1443]: I20250113 21:55:06.609573 1443 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 13 21:55:06.609581 update_engine[1443]: I20250113 21:55:06.609589 1443 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 13 21:55:06.610133 update_engine[1443]: I20250113 21:55:06.609742 1443 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 21:55:06.610133 update_engine[1443]: I20250113 21:55:06.609791 1443 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 21:55:06.610133 update_engine[1443]: I20250113 21:55:06.609804 1443 omaha_request_action.cc:272] Request: Jan 13 21:55:06.610133 update_engine[1443]: [multi-line Omaha error-event XML not preserved in this capture] Jan 13 21:55:06.610133 update_engine[1443]: I20250113 21:55:06.609817 1443 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 21:55:06.610621 update_engine[1443]: I20250113 21:55:06.610177 1443 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 21:55:06.610621 update_engine[1443]: I20250113 21:55:06.610523 1443 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 21:55:06.611246 locksmithd[1463]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 13 21:55:06.620889 update_engine[1443]: E20250113 21:55:06.620790 1443 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 21:55:06.621061 update_engine[1443]: I20250113 21:55:06.620901 1443 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 21:55:06.621061 update_engine[1443]: I20250113 21:55:06.620922 1443 omaha_request_action.cc:617] Omaha request response: Jan 13 21:55:06.621061 update_engine[1443]: I20250113 21:55:06.620939 1443 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 21:55:06.621061 update_engine[1443]: I20250113 21:55:06.620950 1443 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 21:55:06.621061 update_engine[1443]: I20250113 21:55:06.621047 1443 update_attempter.cc:306] Processing Done. Jan 13 21:55:06.621369 update_engine[1443]: I20250113 21:55:06.621065 1443 update_attempter.cc:310] Error event sent. Jan 13 21:55:06.621369 update_engine[1443]: I20250113 21:55:06.621161 1443 update_check_scheduler.cc:74] Next update check in 41m4s Jan 13 21:55:06.621811 locksmithd[1463]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 13 21:55:07.503386 systemd[1]: Started sshd@11-172.24.4.15:22-172.24.4.1:49446.service - OpenSSH per-connection server daemon (172.24.4.1:49446). Jan 13 21:55:08.856262 sshd[4072]: Accepted publickey for core from 172.24.4.1 port 49446 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:55:08.860434 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:55:08.878240 systemd-logind[1442]: New session 14 of user core.
Jan 13 21:55:08.887362 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:55:09.551709 sshd[4072]: pam_unix(sshd:session): session closed for user core Jan 13 21:55:09.558608 systemd[1]: sshd@11-172.24.4.15:22-172.24.4.1:49446.service: Deactivated successfully. Jan 13 21:55:09.565792 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:55:09.568409 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:55:09.570819 systemd-logind[1442]: Removed session 14. Jan 13 21:55:14.582568 systemd[1]: Started sshd@12-172.24.4.15:22-172.24.4.1:36674.service - OpenSSH per-connection server daemon (172.24.4.1:36674). Jan 13 21:55:15.788599 sshd[4086]: Accepted publickey for core from 172.24.4.1 port 36674 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:55:15.792919 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:55:15.804100 systemd-logind[1442]: New session 15 of user core. Jan 13 21:55:15.811330 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:55:16.497111 sshd[4086]: pam_unix(sshd:session): session closed for user core Jan 13 21:55:16.505698 systemd[1]: sshd@12-172.24.4.15:22-172.24.4.1:36674.service: Deactivated successfully. Jan 13 21:55:16.507449 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:55:16.510535 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:55:16.517145 systemd[1]: Started sshd@13-172.24.4.15:22-172.24.4.1:36686.service - OpenSSH per-connection server daemon (172.24.4.1:36686). Jan 13 21:55:16.519752 systemd-logind[1442]: Removed session 15. Jan 13 21:55:17.729601 sshd[4100]: Accepted publickey for core from 172.24.4.1 port 36686 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:55:17.732324 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:55:17.743704 systemd-logind[1442]: New session 16 of user core. Jan 13 21:55:17.749296 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:55:18.578382 sshd[4100]: pam_unix(sshd:session): session closed for user core Jan 13 21:55:18.585770 systemd[1]: sshd@13-172.24.4.15:22-172.24.4.1:36686.service: Deactivated successfully. Jan 13 21:55:18.588486 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:55:18.589808 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:55:18.597254 systemd[1]: Started sshd@14-172.24.4.15:22-172.24.4.1:36696.service - OpenSSH per-connection server daemon (172.24.4.1:36696). Jan 13 21:55:18.599149 systemd-logind[1442]: Removed session 16. Jan 13 21:55:19.853668 sshd[4111]: Accepted publickey for core from 172.24.4.1 port 36696 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:55:19.856738 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:55:19.871462 systemd-logind[1442]: New session 17 of user core. Jan 13 21:55:19.876273 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:55:20.697825 sshd[4111]: pam_unix(sshd:session): session closed for user core Jan 13 21:55:20.705948 systemd[1]: sshd@14-172.24.4.15:22-172.24.4.1:36696.service: Deactivated successfully. Jan 13 21:55:20.711281 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:55:20.713190 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. 
Jan 13 21:55:20.716071 systemd-logind[1442]: Removed session 17. Jan 13 21:55:25.725576 systemd[1]: Started sshd@15-172.24.4.15:22-172.24.4.1:51470.service - OpenSSH per-connection server daemon (172.24.4.1:51470). Jan 13 21:55:27.020186 sshd[4125]: Accepted publickey for core from 172.24.4.1 port 51470 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:55:27.023826 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:55:27.033223 systemd-logind[1442]: New session 18 of user core. Jan 13 21:55:27.046372 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:55:27.870328 sshd[4125]: pam_unix(sshd:session): session closed for user core Jan 13 21:55:27.878896 systemd[1]: sshd@15-172.24.4.15:22-172.24.4.1:51470.service: Deactivated successfully. Jan 13 21:55:27.882415 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:55:27.885166 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:55:27.894628 systemd[1]: Started sshd@16-172.24.4.15:22-172.24.4.1:51472.service - OpenSSH per-connection server daemon (172.24.4.1:51472). Jan 13 21:55:27.899736 systemd-logind[1442]: Removed session 18. Jan 13 21:55:29.137808 sshd[4137]: Accepted publickey for core from 172.24.4.1 port 51472 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:55:29.140855 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:55:29.150939 systemd-logind[1442]: New session 19 of user core. Jan 13 21:55:29.160350 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:55:29.976234 sshd[4137]: pam_unix(sshd:session): session closed for user core Jan 13 21:55:29.989248 systemd[1]: sshd@16-172.24.4.15:22-172.24.4.1:51472.service: Deactivated successfully. Jan 13 21:55:29.994506 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:55:29.997838 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:55:30.007161 systemd[1]: Started sshd@17-172.24.4.15:22-172.24.4.1:51484.service - OpenSSH per-connection server daemon (172.24.4.1:51484). Jan 13 21:55:30.010751 systemd-logind[1442]: Removed session 19. Jan 13 21:55:31.200289 sshd[4148]: Accepted publickey for core from 172.24.4.1 port 51484 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:55:31.201472 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:55:31.211050 systemd-logind[1442]: New session 20 of user core. Jan 13 21:55:31.220377 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:55:33.960104 sshd[4148]: pam_unix(sshd:session): session closed for user core Jan 13 21:55:33.972254 systemd[1]: Started sshd@18-172.24.4.15:22-172.24.4.1:40964.service - OpenSSH per-connection server daemon (172.24.4.1:40964). Jan 13 21:55:33.972730 systemd[1]: sshd@17-172.24.4.15:22-172.24.4.1:51484.service: Deactivated successfully. Jan 13 21:55:33.975432 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:55:33.977907 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:55:33.983060 systemd-logind[1442]: Removed session 20. 
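Sessions 12 through 20 above all follow the same pattern: sshd accepts the publickey, pam_unix opens the session, systemd-logind allocates "New session N of user core", and on logout the session closes and logind removes it. A sketch, again assuming one journal record per line, that pairs the open/remove records to report how long each numbered session lasted (these journal prefixes carry no year, so the comparison only holds within a single log):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

func main() {
	openRe := regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*New session (\d+) of user core`)
	closeRe := regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*Removed session (\d+)`)
	parse := func(s string) time.Time {
		t, _ := time.Parse("Jan _2 15:04:05.999999", s) // journal prefix, no year
		return t
	}

	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if m := openRe.FindStringSubmatch(line); m != nil {
			opened[m[2]] = parse(m[1])
		} else if m := closeRe.FindStringSubmatch(line); m != nil {
			if start, ok := opened[m[2]]; ok {
				fmt.Printf("session %s lasted %s\n", m[2], parse(m[1]).Sub(start).Round(time.Second))
			}
		}
	}
}
```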
Jan 13 21:55:35.528380 sshd[4164]: Accepted publickey for core from 172.24.4.1 port 40964 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:55:35.532787 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:55:35.548576 systemd-logind[1442]: New session 21 of user core. Jan 13 21:55:35.561453 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:55:36.734356 sshd[4164]: pam_unix(sshd:session): session closed for user core Jan 13 21:55:36.747806 systemd[1]: sshd@18-172.24.4.15:22-172.24.4.1:40964.service: Deactivated successfully. Jan 13 21:55:36.751850 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:55:36.754830 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:55:36.764831 systemd[1]: Started sshd@19-172.24.4.15:22-172.24.4.1:40966.service - OpenSSH per-connection server daemon (172.24.4.1:40966). Jan 13 21:55:36.767065 systemd-logind[1442]: Removed session 21. Jan 13 21:55:38.085041 sshd[4179]: Accepted publickey for core from 172.24.4.1 port 40966 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:55:38.087909 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:55:38.097501 systemd-logind[1442]: New session 22 of user core. Jan 13 21:55:38.110272 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 21:55:38.834924 sshd[4179]: pam_unix(sshd:session): session closed for user core Jan 13 21:55:38.844172 systemd[1]: sshd@19-172.24.4.15:22-172.24.4.1:40966.service: Deactivated successfully. Jan 13 21:55:38.850027 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:55:38.853036 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:55:38.858314 systemd-logind[1442]: Removed session 22. Jan 13 21:55:43.868606 systemd[1]: Started sshd@20-172.24.4.15:22-172.24.4.1:55574.service - OpenSSH per-connection server daemon (172.24.4.1:55574). Jan 13 21:55:44.935277 sshd[4197]: Accepted publickey for core from 172.24.4.1 port 55574 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:55:44.938169 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:55:44.949266 systemd-logind[1442]: New session 23 of user core. Jan 13 21:55:44.955261 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:55:45.640351 sshd[4197]: pam_unix(sshd:session): session closed for user core Jan 13 21:55:45.650824 systemd[1]: sshd@20-172.24.4.15:22-172.24.4.1:55574.service: Deactivated successfully. Jan 13 21:55:45.655587 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:55:45.658594 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:55:45.661836 systemd-logind[1442]: Removed session 23. Jan 13 21:55:50.666821 systemd[1]: Started sshd@21-172.24.4.15:22-172.24.4.1:55586.service - OpenSSH per-connection server daemon (172.24.4.1:55586). Jan 13 21:55:51.849699 sshd[4209]: Accepted publickey for core from 172.24.4.1 port 55586 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:55:51.852845 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:55:51.863385 systemd-logind[1442]: New session 24 of user core. Jan 13 21:55:51.874389 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 13 21:55:52.582746 sshd[4209]: pam_unix(sshd:session): session closed for user core Jan 13 21:55:52.595051 systemd[1]: sshd@21-172.24.4.15:22-172.24.4.1:55586.service: Deactivated successfully. Jan 13 21:55:52.598331 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:55:52.603824 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:55:52.610362 systemd[1]: Started sshd@22-172.24.4.15:22-172.24.4.1:55592.service - OpenSSH per-connection server daemon (172.24.4.1:55592). Jan 13 21:55:52.612769 systemd-logind[1442]: Removed session 24. Jan 13 21:55:54.091193 sshd[4222]: Accepted publickey for core from 172.24.4.1 port 55592 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q Jan 13 21:55:54.095330 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:55:54.106109 systemd-logind[1442]: New session 25 of user core. Jan 13 21:55:54.113363 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 21:55:56.180390 containerd[1453]: time="2025-01-13T21:55:56.180329027Z" level=info msg="StopContainer for \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\" with timeout 30 (s)" Jan 13 21:55:56.181134 containerd[1453]: time="2025-01-13T21:55:56.180883855Z" level=info msg="Stop container \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\" with signal terminated" Jan 13 21:55:56.200287 systemd[1]: cri-containerd-a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f.scope: Deactivated successfully. Jan 13 21:55:56.206448 containerd[1453]: time="2025-01-13T21:55:56.206089263Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:55:56.212919 containerd[1453]: time="2025-01-13T21:55:56.212887505Z" level=info msg="StopContainer for \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\" with timeout 2 (s)" Jan 13 21:55:56.213308 containerd[1453]: time="2025-01-13T21:55:56.213287021Z" level=info msg="Stop container \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\" with signal terminated" Jan 13 21:55:56.224842 systemd-networkd[1366]: lxc_health: Link DOWN Jan 13 21:55:56.225418 systemd-networkd[1366]: lxc_health: Lost carrier Jan 13 21:55:56.237262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f-rootfs.mount: Deactivated successfully. Jan 13 21:55:56.244526 systemd[1]: cri-containerd-de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917.scope: Deactivated successfully. Jan 13 21:55:56.244757 systemd[1]: cri-containerd-de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917.scope: Consumed 8.445s CPU time. 
Jan 13 21:55:56.274875 containerd[1453]: time="2025-01-13T21:55:56.274660586Z" level=info msg="shim disconnected" id=a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f namespace=k8s.io
Jan 13 21:55:56.274875 containerd[1453]: time="2025-01-13T21:55:56.274735973Z" level=warning msg="cleaning up after shim disconnected" id=a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f namespace=k8s.io
Jan 13 21:55:56.274875 containerd[1453]: time="2025-01-13T21:55:56.274746163Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:55:56.277111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917-rootfs.mount: Deactivated successfully.
Jan 13 21:55:56.280424 containerd[1453]: time="2025-01-13T21:55:56.280363321Z" level=info msg="shim disconnected" id=de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917 namespace=k8s.io
Jan 13 21:55:56.280652 containerd[1453]: time="2025-01-13T21:55:56.280608847Z" level=warning msg="cleaning up after shim disconnected" id=de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917 namespace=k8s.io
Jan 13 21:55:56.280768 containerd[1453]: time="2025-01-13T21:55:56.280750523Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:55:56.296298 containerd[1453]: time="2025-01-13T21:55:56.296254651Z" level=info msg="StopContainer for \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\" returns successfully"
Jan 13 21:55:56.297385 containerd[1453]: time="2025-01-13T21:55:56.297353184Z" level=info msg="StopPodSandbox for \"67a00b7aac3a074b9ec83976cc9d497ab073b5c69b4414d97d9a1abbf691b6c1\""
Jan 13 21:55:56.297462 containerd[1453]: time="2025-01-13T21:55:56.297400045Z" level=info msg="Container to stop \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:55:56.300275 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67a00b7aac3a074b9ec83976cc9d497ab073b5c69b4414d97d9a1abbf691b6c1-shm.mount: Deactivated successfully.
Jan 13 21:55:56.309802 systemd[1]: cri-containerd-67a00b7aac3a074b9ec83976cc9d497ab073b5c69b4414d97d9a1abbf691b6c1.scope: Deactivated successfully.
Jan 13 21:55:56.325035 containerd[1453]: time="2025-01-13T21:55:56.324887926Z" level=info msg="StopContainer for \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\" returns successfully"
Jan 13 21:55:56.327336 containerd[1453]: time="2025-01-13T21:55:56.325380914Z" level=info msg="StopPodSandbox for \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\""
Jan 13 21:55:56.327336 containerd[1453]: time="2025-01-13T21:55:56.325413497Z" level=info msg="Container to stop \"398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:55:56.327336 containerd[1453]: time="2025-01-13T21:55:56.325426783Z" level=info msg="Container to stop \"91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:55:56.327336 containerd[1453]: time="2025-01-13T21:55:56.325437944Z" level=info msg="Container to stop \"028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:55:56.327336 containerd[1453]: time="2025-01-13T21:55:56.325448414Z" level=info msg="Container to stop \"696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:55:56.327336 containerd[1453]: time="2025-01-13T21:55:56.325459055Z" level=info msg="Container to stop \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:55:56.328126 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9-shm.mount: Deactivated successfully.
Jan 13 21:55:56.337879 systemd[1]: cri-containerd-3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9.scope: Deactivated successfully.
Jan 13 21:55:56.352446 containerd[1453]: time="2025-01-13T21:55:56.352035025Z" level=info msg="shim disconnected" id=67a00b7aac3a074b9ec83976cc9d497ab073b5c69b4414d97d9a1abbf691b6c1 namespace=k8s.io
Jan 13 21:55:56.353570 containerd[1453]: time="2025-01-13T21:55:56.353358315Z" level=warning msg="cleaning up after shim disconnected" id=67a00b7aac3a074b9ec83976cc9d497ab073b5c69b4414d97d9a1abbf691b6c1 namespace=k8s.io
Jan 13 21:55:56.353570 containerd[1453]: time="2025-01-13T21:55:56.353379646Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:55:56.370004 containerd[1453]: time="2025-01-13T21:55:56.369950046Z" level=info msg="TearDown network for sandbox \"67a00b7aac3a074b9ec83976cc9d497ab073b5c69b4414d97d9a1abbf691b6c1\" successfully"
Jan 13 21:55:56.370327 containerd[1453]: time="2025-01-13T21:55:56.370183809Z" level=info msg="StopPodSandbox for \"67a00b7aac3a074b9ec83976cc9d497ab073b5c69b4414d97d9a1abbf691b6c1\" returns successfully"
Jan 13 21:55:56.375291 containerd[1453]: time="2025-01-13T21:55:56.375140865Z" level=info msg="shim disconnected" id=3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9 namespace=k8s.io
Jan 13 21:55:56.375291 containerd[1453]: time="2025-01-13T21:55:56.375184590Z" level=warning msg="cleaning up after shim disconnected" id=3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9 namespace=k8s.io
Jan 13 21:55:56.375291 containerd[1453]: time="2025-01-13T21:55:56.375196103Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:55:56.391636 containerd[1453]: time="2025-01-13T21:55:56.391588636Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:55:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:55:56.392605 containerd[1453]: time="2025-01-13T21:55:56.392558300Z" level=info msg="TearDown network for sandbox \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\" successfully"
Jan 13 21:55:56.392605 containerd[1453]: time="2025-01-13T21:55:56.392624357Z" level=info msg="StopPodSandbox for \"3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9\" returns successfully"
Jan 13 21:55:56.436154 kubelet[2655]: I0113 21:55:56.434814 2655 scope.go:117] "RemoveContainer" containerID="de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917"
Jan 13 21:55:56.440510 containerd[1453]: time="2025-01-13T21:55:56.440428262Z" level=info msg="RemoveContainer for \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\""
Jan 13 21:55:56.461602 containerd[1453]: time="2025-01-13T21:55:56.461557063Z" level=info msg="RemoveContainer for \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\" returns successfully"
Jan 13 21:55:56.461943 kubelet[2655]: I0113 21:55:56.461845 2655 scope.go:117] "RemoveContainer" containerID="028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8"
Jan 13 21:55:56.463352 containerd[1453]: time="2025-01-13T21:55:56.463330737Z" level=info msg="RemoveContainer for \"028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8\""
Jan 13 21:55:56.480424 containerd[1453]: time="2025-01-13T21:55:56.480380237Z" level=info msg="RemoveContainer for \"028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8\" returns successfully"
Jan 13 21:55:56.480634 kubelet[2655]: I0113 21:55:56.480593 2655 scope.go:117] "RemoveContainer" containerID="91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1"
Jan 13 21:55:56.481801 containerd[1453]: time="2025-01-13T21:55:56.481742923Z" level=info msg="RemoveContainer for \"91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1\""
Jan 13 21:55:56.488041 containerd[1453]: time="2025-01-13T21:55:56.487943985Z" level=info msg="RemoveContainer for \"91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1\" returns successfully"
Jan 13 21:55:56.488285 kubelet[2655]: I0113 21:55:56.488167 2655 scope.go:117] "RemoveContainer" containerID="398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5"
Jan 13 21:55:56.490056 containerd[1453]: time="2025-01-13T21:55:56.489987303Z" level=info msg="RemoveContainer for \"398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5\""
Jan 13 21:55:56.501423 containerd[1453]: time="2025-01-13T21:55:56.501371481Z" level=info msg="RemoveContainer for \"398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5\" returns successfully"
Jan 13 21:55:56.501652 kubelet[2655]: I0113 21:55:56.501583 2655 scope.go:117] "RemoveContainer" containerID="696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07"
Jan 13 21:55:56.502713 containerd[1453]: time="2025-01-13T21:55:56.502687978Z" level=info msg="RemoveContainer for \"696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07\""
Jan 13 21:55:56.517150 containerd[1453]: time="2025-01-13T21:55:56.517060920Z" level=info msg="RemoveContainer for \"696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07\" returns successfully"
Jan 13 21:55:56.517408 kubelet[2655]: I0113 21:55:56.517273 2655 scope.go:117] "RemoveContainer" containerID="de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917"
Jan 13 21:55:56.517913 containerd[1453]: time="2025-01-13T21:55:56.517821496Z" level=error msg="ContainerStatus for \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\": not found"
Jan 13 21:55:56.518279 kubelet[2655]: E0113 21:55:56.518121 2655 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\": not found" containerID="de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917"
Jan 13 21:55:56.518279 kubelet[2655]: I0113 21:55:56.518150 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917"} err="failed to get container status \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\": rpc error: code = NotFound desc = an error occurred when try to find container \"de118121753e20d4446051fd7597f167c0fea23d4a33275cceb2a8b56eb57917\": not found"
Jan 13 21:55:56.518279 kubelet[2655]: I0113 21:55:56.518223 2655 scope.go:117] "RemoveContainer" containerID="028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8"
Jan 13 21:55:56.518579 containerd[1453]: time="2025-01-13T21:55:56.518510614Z" level=error msg="ContainerStatus for \"028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8\": not found"
Jan 13 21:55:56.518751 kubelet[2655]: E0113 21:55:56.518700 2655 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8\": not found" containerID="028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8"
Jan 13 21:55:56.518805 kubelet[2655]: I0113 21:55:56.518762 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8"} err="failed to get container status \"028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"028ed4258064d9d23b143b330a10a1e97f29e20ab8531d5b19b6bd51366e23d8\": not found"
Jan 13 21:55:56.518837 kubelet[2655]: I0113 21:55:56.518813 2655 scope.go:117] "RemoveContainer" containerID="91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1"
Jan 13 21:55:56.519200 containerd[1453]: time="2025-01-13T21:55:56.519152251Z" level=error msg="ContainerStatus for \"91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1\": not found"
Jan 13 21:55:56.519755 kubelet[2655]: E0113 21:55:56.519718 2655 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1\": not found" containerID="91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1"
Jan 13 21:55:56.519861 kubelet[2655]: I0113 21:55:56.519817 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1"} err="failed to get container status \"91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"91b128bf797ef5dff6c5ab8d5f9e439ef0b11e2f27ff32f4f283b9f676c401c1\": not found"
Jan 13 21:55:56.519941 kubelet[2655]: I0113 21:55:56.519857 2655 scope.go:117] "RemoveContainer" containerID="398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5"
Jan 13 21:55:56.520255 containerd[1453]: time="2025-01-13T21:55:56.520227329Z" level=error msg="ContainerStatus for \"398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5\": not found"
Jan 13 21:55:56.520537 kubelet[2655]: E0113 21:55:56.520422 2655 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5\": not found" containerID="398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5"
Jan 13 21:55:56.520537 kubelet[2655]: I0113 21:55:56.520457 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5"} err="failed to get container status \"398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"398116f71af3038ce81121b284f2a5804afb113e0f0a252ca73ef0ef7103c7a5\": not found"
Jan 13 21:55:56.520537 kubelet[2655]: I0113 21:55:56.520475 2655 scope.go:117] "RemoveContainer" containerID="696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07"
Jan 13 21:55:56.520841 containerd[1453]: time="2025-01-13T21:55:56.520781495Z" level=error msg="ContainerStatus for \"696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07\": not found"
Jan 13 21:55:56.521297 kubelet[2655]: E0113 21:55:56.521025 2655 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07\": not found" containerID="696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07"
Jan 13 21:55:56.521297 kubelet[2655]: I0113 21:55:56.521048 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07"} err="failed to get container status \"696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07\": rpc error: code = NotFound desc = an error occurred when try to find container \"696a3af630f791db976ae20b3e1f8d69fc9d35c695e39b0f4dc43289d4caec07\": not found"
Jan 13 21:55:56.521297 kubelet[2655]: I0113 21:55:56.521068 2655 scope.go:117] "RemoveContainer" containerID="a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f"
Jan 13 21:55:56.522625 containerd[1453]: time="2025-01-13T21:55:56.522560149Z" level=info msg="RemoveContainer for \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\""
Jan 13 21:55:56.533989 containerd[1453]: time="2025-01-13T21:55:56.533920751Z" level=info msg="RemoveContainer for \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\" returns successfully"
Jan 13 21:55:56.535003 kubelet[2655]: I0113 21:55:56.534217 2655 scope.go:117] "RemoveContainer" containerID="a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f"
Jan 13 21:55:56.536076 containerd[1453]: time="2025-01-13T21:55:56.535613928Z" level=error msg="ContainerStatus for \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\": not found"
Jan 13 21:55:56.536219 kubelet[2655]: E0113 21:55:56.535884 2655 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\": not found" containerID="a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f"
Jan 13 21:55:56.536219 kubelet[2655]: I0113 21:55:56.535932 2655 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f"} err="failed to get container status \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a411af028a27e814c597a329b9d6eee822b0c6a1c3e1923b22788a69d5f1bb9f\": not found"
Jan 13 21:55:56.561062 kubelet[2655]: I0113 21:55:56.560477 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-host-proc-sys-kernel\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.561062 kubelet[2655]: I0113 21:55:56.560569 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ae44bab-0bf3-4977-abe2-686505fc1d70-clustermesh-secrets\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.561062 kubelet[2655]: I0113 21:55:56.560689 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ae44bab-0bf3-4977-abe2-686505fc1d70-cilium-config-path\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.561062 kubelet[2655]: I0113 21:55:56.560694 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:55:56.561062 kubelet[2655]: I0113 21:55:56.560737 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-cni-path\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.561461 kubelet[2655]: I0113 21:55:56.560798 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-cni-path" (OuterVolumeSpecName: "cni-path") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:55:56.561461 kubelet[2655]: I0113 21:55:56.560866 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-lib-modules\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.561461 kubelet[2655]: I0113 21:55:56.561034 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-hostproc\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.561461 kubelet[2655]: I0113 21:55:56.561084 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-etc-cni-netd\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.561461 kubelet[2655]: I0113 21:55:56.561132 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-cilium-cgroup\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.561461 kubelet[2655]: I0113 21:55:56.561186 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ae44bab-0bf3-4977-abe2-686505fc1d70-hubble-tls\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.561817 kubelet[2655]: I0113 21:55:56.561232 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jcph\" (UniqueName: \"kubernetes.io/projected/fcd799eb-a4c0-44ef-a120-5fb2f0404b3e-kube-api-access-9jcph\") pod \"fcd799eb-a4c0-44ef-a120-5fb2f0404b3e\" (UID: \"fcd799eb-a4c0-44ef-a120-5fb2f0404b3e\") "
Jan 13 21:55:56.561817 kubelet[2655]: I0113 21:55:56.561280 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-host-proc-sys-net\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.561817 kubelet[2655]: I0113 21:55:56.561327 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fcd799eb-a4c0-44ef-a120-5fb2f0404b3e-cilium-config-path\") pod \"fcd799eb-a4c0-44ef-a120-5fb2f0404b3e\" (UID: \"fcd799eb-a4c0-44ef-a120-5fb2f0404b3e\") "
Jan 13 21:55:56.561817 kubelet[2655]: I0113 21:55:56.561373 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h65x2\" (UniqueName: \"kubernetes.io/projected/8ae44bab-0bf3-4977-abe2-686505fc1d70-kube-api-access-h65x2\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.561817 kubelet[2655]: I0113 21:55:56.561411 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-cilium-run\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.561817 kubelet[2655]: I0113 21:55:56.561447 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-xtables-lock\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.562790 kubelet[2655]: I0113 21:55:56.561488 2655 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-bpf-maps\") pod \"8ae44bab-0bf3-4977-abe2-686505fc1d70\" (UID: \"8ae44bab-0bf3-4977-abe2-686505fc1d70\") "
Jan 13 21:55:56.562790 kubelet[2655]: I0113 21:55:56.561563 2655 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-host-proc-sys-kernel\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.562790 kubelet[2655]: I0113 21:55:56.561612 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:55:56.562790 kubelet[2655]: I0113 21:55:56.561655 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:55:56.562790 kubelet[2655]: I0113 21:55:56.561694 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-hostproc" (OuterVolumeSpecName: "hostproc") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:55:56.563174 kubelet[2655]: I0113 21:55:56.561729 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:55:56.563174 kubelet[2655]: I0113 21:55:56.561761 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:55:56.573051 kubelet[2655]: I0113 21:55:56.571460 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ae44bab-0bf3-4977-abe2-686505fc1d70-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:55:56.574237 kubelet[2655]: I0113 21:55:56.574162 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:55:56.575912 kubelet[2655]: I0113 21:55:56.575815 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:55:56.576108 kubelet[2655]: I0113 21:55:56.575926 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:55:56.583027 kubelet[2655]: I0113 21:55:56.582886 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd799eb-a4c0-44ef-a120-5fb2f0404b3e-kube-api-access-9jcph" (OuterVolumeSpecName: "kube-api-access-9jcph") pod "fcd799eb-a4c0-44ef-a120-5fb2f0404b3e" (UID: "fcd799eb-a4c0-44ef-a120-5fb2f0404b3e"). InnerVolumeSpecName "kube-api-access-9jcph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:55:56.583684 kubelet[2655]: I0113 21:55:56.583620 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ae44bab-0bf3-4977-abe2-686505fc1d70-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 21:55:56.585188 kubelet[2655]: I0113 21:55:56.583858 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ae44bab-0bf3-4977-abe2-686505fc1d70-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 21:55:56.586906 kubelet[2655]: I0113 21:55:56.586853 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ae44bab-0bf3-4977-abe2-686505fc1d70-kube-api-access-h65x2" (OuterVolumeSpecName: "kube-api-access-h65x2") pod "8ae44bab-0bf3-4977-abe2-686505fc1d70" (UID: "8ae44bab-0bf3-4977-abe2-686505fc1d70"). InnerVolumeSpecName "kube-api-access-h65x2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:55:56.590545 kubelet[2655]: I0113 21:55:56.590441 2655 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcd799eb-a4c0-44ef-a120-5fb2f0404b3e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fcd799eb-a4c0-44ef-a120-5fb2f0404b3e" (UID: "fcd799eb-a4c0-44ef-a120-5fb2f0404b3e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 21:55:56.662675 kubelet[2655]: I0113 21:55:56.662607 2655 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ae44bab-0bf3-4977-abe2-686505fc1d70-cilium-config-path\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.662675 kubelet[2655]: I0113 21:55:56.662668 2655 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-cni-path\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.663023 kubelet[2655]: I0113 21:55:56.662695 2655 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-lib-modules\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.663023 kubelet[2655]: I0113 21:55:56.662721 2655 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-cilium-cgroup\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.663023 kubelet[2655]: I0113 21:55:56.662744 2655 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-hostproc\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.663023 kubelet[2655]: I0113 21:55:56.662770 2655 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-etc-cni-netd\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.663023 kubelet[2655]: I0113 21:55:56.662797 2655 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ae44bab-0bf3-4977-abe2-686505fc1d70-hubble-tls\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.663023 kubelet[2655]: I0113 21:55:56.662822 2655 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9jcph\" (UniqueName: \"kubernetes.io/projected/fcd799eb-a4c0-44ef-a120-5fb2f0404b3e-kube-api-access-9jcph\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.663023 kubelet[2655]: I0113 21:55:56.662845 2655 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-host-proc-sys-net\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.663567 kubelet[2655]: I0113 21:55:56.662871 2655 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-bpf-maps\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.663567 kubelet[2655]: I0113 21:55:56.662896 2655 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fcd799eb-a4c0-44ef-a120-5fb2f0404b3e-cilium-config-path\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.663567 kubelet[2655]: I0113 21:55:56.662919 2655 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h65x2\" (UniqueName: \"kubernetes.io/projected/8ae44bab-0bf3-4977-abe2-686505fc1d70-kube-api-access-h65x2\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.663567 kubelet[2655]: I0113 21:55:56.662942 2655 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-cilium-run\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.663567 kubelet[2655]: I0113 21:55:56.663006 2655 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ae44bab-0bf3-4977-abe2-686505fc1d70-xtables-lock\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.663567 kubelet[2655]: I0113 21:55:56.663036 2655 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ae44bab-0bf3-4977-abe2-686505fc1d70-clustermesh-secrets\") on node \"ci-4081-3-0-2-4850f65211.novalocal\" DevicePath \"\""
Jan 13 21:55:56.745111 systemd[1]: Removed slice kubepods-burstable-pod8ae44bab_0bf3_4977_abe2_686505fc1d70.slice - libcontainer container kubepods-burstable-pod8ae44bab_0bf3_4977_abe2_686505fc1d70.slice.
Jan 13 21:55:56.745350 systemd[1]: kubepods-burstable-pod8ae44bab_0bf3_4977_abe2_686505fc1d70.slice: Consumed 8.527s CPU time.
Jan 13 21:55:56.759886 systemd[1]: Removed slice kubepods-besteffort-podfcd799eb_a4c0_44ef_a120_5fb2f0404b3e.slice - libcontainer container kubepods-besteffort-podfcd799eb_a4c0_44ef_a120_5fb2f0404b3e.slice.
Jan 13 21:55:57.189768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67a00b7aac3a074b9ec83976cc9d497ab073b5c69b4414d97d9a1abbf691b6c1-rootfs.mount: Deactivated successfully.
Jan 13 21:55:57.190104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3932b4571baa57fe8d9e9a12c6e26b81a55719109a5043433bbbdc4971747de9-rootfs.mount: Deactivated successfully.
Jan 13 21:55:57.190366 systemd[1]: var-lib-kubelet-pods-fcd799eb\x2da4c0\x2d44ef\x2da120\x2d5fb2f0404b3e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9jcph.mount: Deactivated successfully.
Jan 13 21:55:57.190554 systemd[1]: var-lib-kubelet-pods-8ae44bab\x2d0bf3\x2d4977\x2dabe2\x2d686505fc1d70-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh65x2.mount: Deactivated successfully.
Jan 13 21:55:57.190715 systemd[1]: var-lib-kubelet-pods-8ae44bab\x2d0bf3\x2d4977\x2dabe2\x2d686505fc1d70-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 21:55:57.190881 systemd[1]: var-lib-kubelet-pods-8ae44bab\x2d0bf3\x2d4977\x2dabe2\x2d686505fc1d70-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 21:55:58.277365 sshd[4222]: pam_unix(sshd:session): session closed for user core
Jan 13 21:55:58.290020 systemd[1]: sshd@22-172.24.4.15:22-172.24.4.1:55592.service: Deactivated successfully.
Jan 13 21:55:58.294654 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 21:55:58.297490 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit.
Jan 13 21:55:58.304568 systemd[1]: Started sshd@23-172.24.4.15:22-172.24.4.1:35890.service - OpenSSH per-connection server daemon (172.24.4.1:35890).
Jan 13 21:55:58.309428 systemd-logind[1442]: Removed session 25.
Jan 13 21:55:58.859497 kubelet[2655]: I0113 21:55:58.858415 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ae44bab-0bf3-4977-abe2-686505fc1d70" path="/var/lib/kubelet/pods/8ae44bab-0bf3-4977-abe2-686505fc1d70/volumes"
Jan 13 21:55:58.862933 kubelet[2655]: I0113 21:55:58.862876 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd799eb-a4c0-44ef-a120-5fb2f0404b3e" path="/var/lib/kubelet/pods/fcd799eb-a4c0-44ef-a120-5fb2f0404b3e/volumes"
Jan 13 21:55:59.010212 kubelet[2655]: E0113 21:55:59.010112 2655 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:55:59.676243 sshd[4380]: Accepted publickey for core from 172.24.4.1 port 35890 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 21:55:59.681673 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:55:59.693048 systemd-logind[1442]: New session 26 of user core.
Jan 13 21:55:59.699300 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 21:56:00.990482 kubelet[2655]: I0113 21:56:00.988595 2655 topology_manager.go:215] "Topology Admit Handler" podUID="ebdc5098-a1d1-4406-94a9-097d1276896c" podNamespace="kube-system" podName="cilium-vdjfc"
Jan 13 21:56:00.995000 kubelet[2655]: E0113 21:56:00.992639 2655 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ae44bab-0bf3-4977-abe2-686505fc1d70" containerName="mount-bpf-fs"
Jan 13 21:56:00.995000 kubelet[2655]: E0113 21:56:00.992672 2655 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ae44bab-0bf3-4977-abe2-686505fc1d70" containerName="cilium-agent"
Jan 13 21:56:00.995000 kubelet[2655]: E0113 21:56:00.992681 2655 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fcd799eb-a4c0-44ef-a120-5fb2f0404b3e" containerName="cilium-operator"
Jan 13 21:56:00.995000 kubelet[2655]: E0113 21:56:00.992688 2655 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ae44bab-0bf3-4977-abe2-686505fc1d70" containerName="clean-cilium-state"
Jan 13 21:56:00.995000 kubelet[2655]: E0113 21:56:00.992706 2655 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ae44bab-0bf3-4977-abe2-686505fc1d70" containerName="mount-cgroup"
Jan 13 21:56:00.995000 kubelet[2655]: E0113 21:56:00.992713 2655 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ae44bab-0bf3-4977-abe2-686505fc1d70" containerName="apply-sysctl-overwrites"
Jan 13 21:56:00.995000 kubelet[2655]: I0113 21:56:00.992739 2655 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd799eb-a4c0-44ef-a120-5fb2f0404b3e" containerName="cilium-operator"
Jan 13 21:56:00.995000 kubelet[2655]: I0113 21:56:00.992751 2655 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae44bab-0bf3-4977-abe2-686505fc1d70" containerName="cilium-agent"
Jan 13 21:56:01.005119 systemd[1]: Created slice kubepods-burstable-podebdc5098_a1d1_4406_94a9_097d1276896c.slice - libcontainer container kubepods-burstable-podebdc5098_a1d1_4406_94a9_097d1276896c.slice.
Jan 13 21:56:01.093483 kubelet[2655]: I0113 21:56:01.093453 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ebdc5098-a1d1-4406-94a9-097d1276896c-bpf-maps\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.093665 kubelet[2655]: I0113 21:56:01.093652 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw447\" (UniqueName: \"kubernetes.io/projected/ebdc5098-a1d1-4406-94a9-097d1276896c-kube-api-access-rw447\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.093802 kubelet[2655]: I0113 21:56:01.093786 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebdc5098-a1d1-4406-94a9-097d1276896c-xtables-lock\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.093914 kubelet[2655]: I0113 21:56:01.093901 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ebdc5098-a1d1-4406-94a9-097d1276896c-hostproc\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.094050 kubelet[2655]: I0113 21:56:01.094036 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebdc5098-a1d1-4406-94a9-097d1276896c-etc-cni-netd\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.094160 kubelet[2655]: I0113 21:56:01.094146 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ebdc5098-a1d1-4406-94a9-097d1276896c-host-proc-sys-kernel\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.094249 kubelet[2655]: I0113 21:56:01.094236 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ebdc5098-a1d1-4406-94a9-097d1276896c-cilium-run\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.094330 kubelet[2655]: I0113 21:56:01.094318 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ebdc5098-a1d1-4406-94a9-097d1276896c-cni-path\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.094417 kubelet[2655]: I0113 21:56:01.094404 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebdc5098-a1d1-4406-94a9-097d1276896c-lib-modules\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.094524 kubelet[2655]: I0113 21:56:01.094511 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ebdc5098-a1d1-4406-94a9-097d1276896c-clustermesh-secrets\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.094622 kubelet[2655]: I0113 21:56:01.094608 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ebdc5098-a1d1-4406-94a9-097d1276896c-host-proc-sys-net\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.094712 kubelet[2655]: I0113 21:56:01.094700 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ebdc5098-a1d1-4406-94a9-097d1276896c-hubble-tls\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.094894 kubelet[2655]: I0113 21:56:01.094803 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebdc5098-a1d1-4406-94a9-097d1276896c-cilium-config-path\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.094894 kubelet[2655]: I0113 21:56:01.094828 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ebdc5098-a1d1-4406-94a9-097d1276896c-cilium-cgroup\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.094894 kubelet[2655]: I0113 21:56:01.094845 2655 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ebdc5098-a1d1-4406-94a9-097d1276896c-cilium-ipsec-secrets\") pod \"cilium-vdjfc\" (UID: \"ebdc5098-a1d1-4406-94a9-097d1276896c\") " pod="kube-system/cilium-vdjfc"
Jan 13 21:56:01.155542 sshd[4380]: pam_unix(sshd:session): session closed for user core
Jan 13 21:56:01.161707 systemd[1]: sshd@23-172.24.4.15:22-172.24.4.1:35890.service: Deactivated successfully.
Jan 13 21:56:01.163785 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 21:56:01.165597 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit.
Jan 13 21:56:01.173819 systemd[1]: Started sshd@24-172.24.4.15:22-172.24.4.1:35892.service - OpenSSH per-connection server daemon (172.24.4.1:35892).
Jan 13 21:56:01.176413 systemd-logind[1442]: Removed session 26.
Jan 13 21:56:01.314380 containerd[1453]: time="2025-01-13T21:56:01.313589222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vdjfc,Uid:ebdc5098-a1d1-4406-94a9-097d1276896c,Namespace:kube-system,Attempt:0,}"
Jan 13 21:56:01.476742 containerd[1453]: time="2025-01-13T21:56:01.475778948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:56:01.476742 containerd[1453]: time="2025-01-13T21:56:01.475832603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:56:01.476742 containerd[1453]: time="2025-01-13T21:56:01.475846129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:56:01.476742 containerd[1453]: time="2025-01-13T21:56:01.475911766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:56:01.505129 systemd[1]: Started cri-containerd-945ac2523e36593c3f1c76347874b8c3e29dc9298a0779c111a9db2e0b33e809.scope - libcontainer container 945ac2523e36593c3f1c76347874b8c3e29dc9298a0779c111a9db2e0b33e809.
Jan 13 21:56:01.544711 containerd[1453]: time="2025-01-13T21:56:01.544621198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vdjfc,Uid:ebdc5098-a1d1-4406-94a9-097d1276896c,Namespace:kube-system,Attempt:0,} returns sandbox id \"945ac2523e36593c3f1c76347874b8c3e29dc9298a0779c111a9db2e0b33e809\""
Jan 13 21:56:01.552191 containerd[1453]: time="2025-01-13T21:56:01.552147290Z" level=info msg="CreateContainer within sandbox \"945ac2523e36593c3f1c76347874b8c3e29dc9298a0779c111a9db2e0b33e809\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:56:01.573619 containerd[1453]: time="2025-01-13T21:56:01.573443264Z" level=info msg="CreateContainer within sandbox \"945ac2523e36593c3f1c76347874b8c3e29dc9298a0779c111a9db2e0b33e809\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"78d1acff1ef852459ad97562ecac2948d5a9464324961ee011382268be4410a1\""
Jan 13 21:56:01.576384 containerd[1453]: time="2025-01-13T21:56:01.574380447Z" level=info msg="StartContainer for \"78d1acff1ef852459ad97562ecac2948d5a9464324961ee011382268be4410a1\""
Jan 13 21:56:01.611154 systemd[1]: Started cri-containerd-78d1acff1ef852459ad97562ecac2948d5a9464324961ee011382268be4410a1.scope - libcontainer container 78d1acff1ef852459ad97562ecac2948d5a9464324961ee011382268be4410a1.
Jan 13 21:56:01.649381 containerd[1453]: time="2025-01-13T21:56:01.648565241Z" level=info msg="StartContainer for \"78d1acff1ef852459ad97562ecac2948d5a9464324961ee011382268be4410a1\" returns successfully"
Jan 13 21:56:01.655863 systemd[1]: cri-containerd-78d1acff1ef852459ad97562ecac2948d5a9464324961ee011382268be4410a1.scope: Deactivated successfully.
Jan 13 21:56:01.701939 containerd[1453]: time="2025-01-13T21:56:01.701719408Z" level=info msg="shim disconnected" id=78d1acff1ef852459ad97562ecac2948d5a9464324961ee011382268be4410a1 namespace=k8s.io
Jan 13 21:56:01.701939 containerd[1453]: time="2025-01-13T21:56:01.701796318Z" level=warning msg="cleaning up after shim disconnected" id=78d1acff1ef852459ad97562ecac2948d5a9464324961ee011382268be4410a1 namespace=k8s.io
Jan 13 21:56:01.701939 containerd[1453]: time="2025-01-13T21:56:01.701806838Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:56:01.715648 containerd[1453]: time="2025-01-13T21:56:01.714473523Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:56:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:56:01.811264 kubelet[2655]: I0113 21:56:01.810954 2655 setters.go:580] "Node became not ready" node="ci-4081-3-0-2-4850f65211.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:56:01Z","lastTransitionTime":"2025-01-13T21:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 21:56:02.471248 containerd[1453]: time="2025-01-13T21:56:02.471166897Z" level=info msg="CreateContainer within sandbox \"945ac2523e36593c3f1c76347874b8c3e29dc9298a0779c111a9db2e0b33e809\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:56:02.490053 sshd[4392]: Accepted publickey for core from 172.24.4.1 port 35892 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 21:56:02.495357 sshd[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:56:02.510359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197746618.mount: Deactivated successfully.
Jan 13 21:56:02.525475 containerd[1453]: time="2025-01-13T21:56:02.523732784Z" level=info msg="CreateContainer within sandbox \"945ac2523e36593c3f1c76347874b8c3e29dc9298a0779c111a9db2e0b33e809\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"03bbb97f15ce1701d4d5f1d2249b0c43adfdf12a3e448e6b49076f27e21188b4\""
Jan 13 21:56:02.529258 containerd[1453]: time="2025-01-13T21:56:02.528224035Z" level=info msg="StartContainer for \"03bbb97f15ce1701d4d5f1d2249b0c43adfdf12a3e448e6b49076f27e21188b4\""
Jan 13 21:56:02.541753 systemd-logind[1442]: New session 27 of user core.
Jan 13 21:56:02.549243 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 13 21:56:02.586348 systemd[1]: Started cri-containerd-03bbb97f15ce1701d4d5f1d2249b0c43adfdf12a3e448e6b49076f27e21188b4.scope - libcontainer container 03bbb97f15ce1701d4d5f1d2249b0c43adfdf12a3e448e6b49076f27e21188b4.
Jan 13 21:56:02.621700 containerd[1453]: time="2025-01-13T21:56:02.621572716Z" level=info msg="StartContainer for \"03bbb97f15ce1701d4d5f1d2249b0c43adfdf12a3e448e6b49076f27e21188b4\" returns successfully"
Jan 13 21:56:02.623653 systemd[1]: cri-containerd-03bbb97f15ce1701d4d5f1d2249b0c43adfdf12a3e448e6b49076f27e21188b4.scope: Deactivated successfully.
Jan 13 21:56:02.657523 containerd[1453]: time="2025-01-13T21:56:02.657419543Z" level=info msg="shim disconnected" id=03bbb97f15ce1701d4d5f1d2249b0c43adfdf12a3e448e6b49076f27e21188b4 namespace=k8s.io
Jan 13 21:56:02.657523 containerd[1453]: time="2025-01-13T21:56:02.657516471Z" level=warning msg="cleaning up after shim disconnected" id=03bbb97f15ce1701d4d5f1d2249b0c43adfdf12a3e448e6b49076f27e21188b4 namespace=k8s.io
Jan 13 21:56:02.657523 containerd[1453]: time="2025-01-13T21:56:02.657528345Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:56:03.164280 sshd[4392]: pam_unix(sshd:session): session closed for user core
Jan 13 21:56:03.177830 systemd[1]: sshd@24-172.24.4.15:22-172.24.4.1:35892.service: Deactivated successfully.
Jan 13 21:56:03.182399 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 21:56:03.185241 systemd-logind[1442]: Session 27 logged out. Waiting for processes to exit.
Jan 13 21:56:03.195674 systemd[1]: Started sshd@25-172.24.4.15:22-172.24.4.1:35904.service - OpenSSH per-connection server daemon (172.24.4.1:35904).
Jan 13 21:56:03.199510 systemd-logind[1442]: Removed session 27.
Jan 13 21:56:03.207673 systemd[1]: run-containerd-runc-k8s.io-03bbb97f15ce1701d4d5f1d2249b0c43adfdf12a3e448e6b49076f27e21188b4-runc.fbiQea.mount: Deactivated successfully.
Jan 13 21:56:03.207913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03bbb97f15ce1701d4d5f1d2249b0c43adfdf12a3e448e6b49076f27e21188b4-rootfs.mount: Deactivated successfully.
Jan 13 21:56:03.488522 containerd[1453]: time="2025-01-13T21:56:03.484523602Z" level=info msg="CreateContainer within sandbox \"945ac2523e36593c3f1c76347874b8c3e29dc9298a0779c111a9db2e0b33e809\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:56:03.541554 containerd[1453]: time="2025-01-13T21:56:03.541303207Z" level=info msg="CreateContainer within sandbox \"945ac2523e36593c3f1c76347874b8c3e29dc9298a0779c111a9db2e0b33e809\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3515a5963fc78d6b216d3ea0ade7c3c1a2ee5d6612af64175d43c4ce4892d614\""
Jan 13 21:56:03.546031 containerd[1453]: time="2025-01-13T21:56:03.543997703Z" level=info msg="StartContainer for \"3515a5963fc78d6b216d3ea0ade7c3c1a2ee5d6612af64175d43c4ce4892d614\""
Jan 13 21:56:03.586223 systemd[1]: Started cri-containerd-3515a5963fc78d6b216d3ea0ade7c3c1a2ee5d6612af64175d43c4ce4892d614.scope - libcontainer container 3515a5963fc78d6b216d3ea0ade7c3c1a2ee5d6612af64175d43c4ce4892d614.
Jan 13 21:56:03.619172 containerd[1453]: time="2025-01-13T21:56:03.619042227Z" level=info msg="StartContainer for \"3515a5963fc78d6b216d3ea0ade7c3c1a2ee5d6612af64175d43c4ce4892d614\" returns successfully"
Jan 13 21:56:03.622600 systemd[1]: cri-containerd-3515a5963fc78d6b216d3ea0ade7c3c1a2ee5d6612af64175d43c4ce4892d614.scope: Deactivated successfully.
Jan 13 21:56:03.757178 containerd[1453]: time="2025-01-13T21:56:03.756478382Z" level=info msg="shim disconnected" id=3515a5963fc78d6b216d3ea0ade7c3c1a2ee5d6612af64175d43c4ce4892d614 namespace=k8s.io
Jan 13 21:56:03.757178 containerd[1453]: time="2025-01-13T21:56:03.756704693Z" level=warning msg="cleaning up after shim disconnected" id=3515a5963fc78d6b216d3ea0ade7c3c1a2ee5d6612af64175d43c4ce4892d614 namespace=k8s.io
Jan 13 21:56:03.757178 containerd[1453]: time="2025-01-13T21:56:03.756914751Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:56:03.792468 containerd[1453]: time="2025-01-13T21:56:03.792375750Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:56:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:56:04.012392 kubelet[2655]: E0113 21:56:04.012173 2655 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:56:04.204861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3515a5963fc78d6b216d3ea0ade7c3c1a2ee5d6612af64175d43c4ce4892d614-rootfs.mount: Deactivated successfully.
Jan 13 21:56:04.483837 containerd[1453]: time="2025-01-13T21:56:04.483755287Z" level=info msg="CreateContainer within sandbox \"945ac2523e36593c3f1c76347874b8c3e29dc9298a0779c111a9db2e0b33e809\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:56:04.507834 containerd[1453]: time="2025-01-13T21:56:04.507753914Z" level=info msg="CreateContainer within sandbox \"945ac2523e36593c3f1c76347874b8c3e29dc9298a0779c111a9db2e0b33e809\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3ec456ad2460683e77554d89ea3fecdf81545ec6a1215d84c056e23cae3f0086\""
Jan 13 21:56:04.510248 containerd[1453]: time="2025-01-13T21:56:04.510206450Z" level=info msg="StartContainer for \"3ec456ad2460683e77554d89ea3fecdf81545ec6a1215d84c056e23cae3f0086\""
Jan 13 21:56:04.541171 systemd[1]: Started cri-containerd-3ec456ad2460683e77554d89ea3fecdf81545ec6a1215d84c056e23cae3f0086.scope - libcontainer container 3ec456ad2460683e77554d89ea3fecdf81545ec6a1215d84c056e23cae3f0086.
Jan 13 21:56:04.565200 systemd[1]: cri-containerd-3ec456ad2460683e77554d89ea3fecdf81545ec6a1215d84c056e23cae3f0086.scope: Deactivated successfully.
Jan 13 21:56:04.572254 containerd[1453]: time="2025-01-13T21:56:04.572164675Z" level=info msg="StartContainer for \"3ec456ad2460683e77554d89ea3fecdf81545ec6a1215d84c056e23cae3f0086\" returns successfully"
Jan 13 21:56:04.601843 containerd[1453]: time="2025-01-13T21:56:04.601763039Z" level=info msg="shim disconnected" id=3ec456ad2460683e77554d89ea3fecdf81545ec6a1215d84c056e23cae3f0086 namespace=k8s.io
Jan 13 21:56:04.601843 containerd[1453]: time="2025-01-13T21:56:04.601823216Z" level=warning msg="cleaning up after shim disconnected" id=3ec456ad2460683e77554d89ea3fecdf81545ec6a1215d84c056e23cae3f0086 namespace=k8s.io
Jan 13 21:56:04.601843 containerd[1453]: time="2025-01-13T21:56:04.601836682Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:56:04.613005 sshd[4566]: Accepted publickey for core from 172.24.4.1 port 35904 ssh2: RSA SHA256:1PaGXDzsdUtjcdfgab76H31xHHu9Ttfm5+6JfJxGu2Q
Jan 13 21:56:04.616075 sshd[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:56:04.625128 systemd-logind[1442]: New session 28 of user core.
Jan 13 21:56:04.630208 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 13 21:56:05.207513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ec456ad2460683e77554d89ea3fecdf81545ec6a1215d84c056e23cae3f0086-rootfs.mount: Deactivated successfully.
Jan 13 21:56:05.503344 containerd[1453]: time="2025-01-13T21:56:05.501472531Z" level=info msg="CreateContainer within sandbox \"945ac2523e36593c3f1c76347874b8c3e29dc9298a0779c111a9db2e0b33e809\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:56:05.553850 containerd[1453]: time="2025-01-13T21:56:05.551632349Z" level=info msg="CreateContainer within sandbox \"945ac2523e36593c3f1c76347874b8c3e29dc9298a0779c111a9db2e0b33e809\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6149eeef4ed83fed816a97f0a1d412d88d1be1aabdfb0199051a4ea7cdcda66d\""
Jan 13 21:56:05.556119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount310739972.mount: Deactivated successfully.
Jan 13 21:56:05.560677 containerd[1453]: time="2025-01-13T21:56:05.560603305Z" level=info msg="StartContainer for \"6149eeef4ed83fed816a97f0a1d412d88d1be1aabdfb0199051a4ea7cdcda66d\""
Jan 13 21:56:05.593094 systemd[1]: Started cri-containerd-6149eeef4ed83fed816a97f0a1d412d88d1be1aabdfb0199051a4ea7cdcda66d.scope - libcontainer container 6149eeef4ed83fed816a97f0a1d412d88d1be1aabdfb0199051a4ea7cdcda66d.
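Annotation: the "Accepted publickey ... RSA SHA256:1PaGX..." line records the OpenSSH-style key fingerprint, which is the unpadded base64 of the SHA-256 hash of the public key blob. A small sketch of reproducing such a fingerprint from an authorized_keys entry; the path below is an assumed location for the core user's key, not something taken from this log.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumption: the core user's authorized key lives at the usual path.
	data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		log.Fatal(err)
	}

	// ParseAuthorizedKey reads the first key entry in the file.
	pub, _, _, _, err := ssh.ParseAuthorizedKey(data)
	if err != nil {
		log.Fatal(err)
	}

	// FingerprintSHA256 yields the "SHA256:..." form sshd logs on accept.
	fmt.Println(ssh.FingerprintSHA256(pub))
}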
Jan 13 21:56:05.688763 containerd[1453]: time="2025-01-13T21:56:05.688607613Z" level=info msg="StartContainer for \"6149eeef4ed83fed816a97f0a1d412d88d1be1aabdfb0199051a4ea7cdcda66d\" returns successfully"
Jan 13 21:56:06.032990 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:56:06.083009 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jan 13 21:56:09.058018 systemd-networkd[1366]: lxc_health: Link UP
Jan 13 21:56:09.065095 systemd-networkd[1366]: lxc_health: Gained carrier
Jan 13 21:56:09.351833 kubelet[2655]: I0113 21:56:09.351554 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vdjfc" podStartSLOduration=9.351464939 podStartE2EDuration="9.351464939s" podCreationTimestamp="2025-01-13 21:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:56:06.542039474 +0000 UTC m=+147.781562292" watchObservedRunningTime="2025-01-13 21:56:09.351464939 +0000 UTC m=+150.590987717"
Jan 13 21:56:10.544670 systemd-networkd[1366]: lxc_health: Gained IPv6LL
Jan 13 21:56:16.731423 sshd[4566]: pam_unix(sshd:session): session closed for user core
Jan 13 21:56:16.739042 systemd[1]: sshd@25-172.24.4.15:22-172.24.4.1:35904.service: Deactivated successfully.
Jan 13 21:56:16.744471 systemd[1]: session-28.scope: Deactivated successfully.
Jan 13 21:56:16.746935 systemd-logind[1442]: Session 28 logged out. Waiting for processes to exit.
Jan 13 21:56:16.749836 systemd-logind[1442]: Removed session 28.
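Annotation: in the pod_startup_latency_tracker entry above, the reported podStartSLOduration is simply the watch-observed running time minus the pod creation timestamp, with no image-pull interval excluded (both pull timestamps are Go zero values, i.e. the images were already present on the node): 21:56:09.351464939 - 21:56:00 = 9.351464939 s. A quick sketch of that subtraction using the two timestamps from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kubelet log line above; parse errors
	// ignored for brevity since the inputs are fixed literals.
	created, _ := time.Parse(time.RFC3339Nano, "2025-01-13T21:56:00Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-01-13T21:56:09.351464939Z")

	// No pull interval to subtract: firstStartedPulling/lastFinishedPulling
	// are zero values in the log entry.
	fmt.Println(observed.Sub(created)) // prints 9.351464939s, the logged podStartSLOduration
}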