Jan 29 12:41:25.058729 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 12:41:25.058753 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:41:25.058763 kernel: BIOS-provided physical RAM map: Jan 29 12:41:25.058771 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 12:41:25.058778 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 12:41:25.058788 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 12:41:25.058796 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 29 12:41:25.058804 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 29 12:41:25.058811 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 12:41:25.058818 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 12:41:25.058826 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 29 12:41:25.058833 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 12:41:25.058840 kernel: NX (Execute Disable) protection: active Jan 29 12:41:25.058848 kernel: APIC: Static calls initialized Jan 29 12:41:25.058859 kernel: SMBIOS 3.0.0 present. Jan 29 12:41:25.058867 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 29 12:41:25.058874 kernel: Hypervisor detected: KVM Jan 29 12:41:25.058882 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 12:41:25.058890 kernel: kvm-clock: using sched offset of 3312319920 cycles Jan 29 12:41:25.058899 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 12:41:25.058908 kernel: tsc: Detected 1996.249 MHz processor Jan 29 12:41:25.058916 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 12:41:25.058924 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 12:41:25.058932 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 29 12:41:25.058940 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 12:41:25.058948 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 12:41:25.058956 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 29 12:41:25.058964 kernel: ACPI: Early table checksum verification disabled Jan 29 12:41:25.058974 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 29 12:41:25.058982 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:41:25.058990 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:41:25.058998 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:41:25.059005 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 29 12:41:25.059013 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:41:25.059021 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:41:25.059029 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 29 12:41:25.059037 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 29 12:41:25.059046 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 29 12:41:25.059054 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 29 12:41:25.059062 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 29 12:41:25.059074 kernel: No NUMA configuration found Jan 29 12:41:25.059082 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 29 12:41:25.059090 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 29 12:41:25.059101 kernel: Zone ranges: Jan 29 12:41:25.059109 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 12:41:25.059117 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 12:41:25.059125 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 29 12:41:25.059134 kernel: Movable zone start for each node Jan 29 12:41:25.059142 kernel: Early memory node ranges Jan 29 12:41:25.059150 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 12:41:25.059158 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 29 12:41:25.059168 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 29 12:41:25.059176 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 29 12:41:25.059185 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 12:41:25.059193 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 12:41:25.059201 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 29 12:41:25.059209 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 12:41:25.059217 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 12:41:25.059226 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 12:41:25.059260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 12:41:25.059271 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 12:41:25.059279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 12:41:25.059287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 12:41:25.059295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 12:41:25.059304 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 12:41:25.059312 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 12:41:25.059320 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 12:41:25.059328 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 29 12:41:25.059336 kernel: Booting paravirtualized kernel on KVM Jan 29 12:41:25.059346 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 12:41:25.059355 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 12:41:25.059363 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 12:41:25.059371 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 12:41:25.059379 kernel: pcpu-alloc: [0] 0 1 Jan 29 12:41:25.059387 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 29 12:41:25.059397 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:41:25.059406 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:41:25.059417 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 12:41:25.059425 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:41:25.059433 kernel: Fallback order for Node 0: 0 Jan 29 12:41:25.059441 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 29 12:41:25.059450 kernel: Policy zone: Normal Jan 29 12:41:25.059458 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:41:25.059466 kernel: software IO TLB: area num 2. Jan 29 12:41:25.059475 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 227308K reserved, 0K cma-reserved) Jan 29 12:41:25.059483 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 12:41:25.059493 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 12:41:25.059501 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 12:41:25.059509 kernel: Dynamic Preempt: voluntary Jan 29 12:41:25.059517 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:41:25.059526 kernel: rcu: RCU event tracing is enabled. Jan 29 12:41:25.059535 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 12:41:25.059543 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:41:25.059552 kernel: Rude variant of Tasks RCU enabled. Jan 29 12:41:25.059560 kernel: Tracing variant of Tasks RCU enabled. Jan 29 12:41:25.059568 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 12:41:25.059579 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 12:41:25.059587 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 12:41:25.059595 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 12:41:25.059603 kernel: Console: colour VGA+ 80x25 Jan 29 12:41:25.059612 kernel: printk: console [tty0] enabled Jan 29 12:41:25.059620 kernel: printk: console [ttyS0] enabled Jan 29 12:41:25.059628 kernel: ACPI: Core revision 20230628 Jan 29 12:41:25.059636 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 12:41:25.059644 kernel: x2apic enabled Jan 29 12:41:25.059655 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 12:41:25.059663 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 12:41:25.059671 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 12:41:25.059680 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 29 12:41:25.059688 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 29 12:41:25.059696 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 29 12:41:25.059704 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 12:41:25.059712 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 12:41:25.059720 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 12:41:25.059731 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 12:41:25.059739 kernel: Speculative Store Bypass: Vulnerable Jan 29 12:41:25.059747 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 29 12:41:25.059756 kernel: Freeing SMP alternatives memory: 32K Jan 29 12:41:25.059770 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:41:25.059780 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:41:25.059789 kernel: landlock: Up and running. Jan 29 12:41:25.059798 kernel: SELinux: Initializing. Jan 29 12:41:25.059806 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:41:25.059815 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:41:25.059824 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 29 12:41:25.059835 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:41:25.059844 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:41:25.059853 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:41:25.059861 kernel: Performance Events: AMD PMU driver. Jan 29 12:41:25.059870 kernel: ... version: 0 Jan 29 12:41:25.059880 kernel: ... bit width: 48 Jan 29 12:41:25.059889 kernel: ... generic registers: 4 Jan 29 12:41:25.059898 kernel: ... value mask: 0000ffffffffffff Jan 29 12:41:25.059906 kernel: ... max period: 00007fffffffffff Jan 29 12:41:25.059915 kernel: ... fixed-purpose events: 0 Jan 29 12:41:25.059924 kernel: ... event mask: 000000000000000f Jan 29 12:41:25.059932 kernel: signal: max sigframe size: 1440 Jan 29 12:41:25.059941 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:41:25.059950 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:41:25.059960 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:41:25.059969 kernel: smpboot: x86: Booting SMP configuration: Jan 29 12:41:25.059978 kernel: .... 
node #0, CPUs: #1 Jan 29 12:41:25.060454 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 12:41:25.060467 kernel: smpboot: Max logical packages: 2 Jan 29 12:41:25.060476 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 29 12:41:25.060485 kernel: devtmpfs: initialized Jan 29 12:41:25.060494 kernel: x86/mm: Memory block size: 128MB Jan 29 12:41:25.060503 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:41:25.060511 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 12:41:25.060525 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:41:25.060534 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:41:25.060543 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:41:25.060551 kernel: audit: type=2000 audit(1738154483.532:1): state=initialized audit_enabled=0 res=1 Jan 29 12:41:25.060560 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:41:25.060569 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 12:41:25.060577 kernel: cpuidle: using governor menu Jan 29 12:41:25.060586 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:41:25.060594 kernel: dca service started, version 1.12.1 Jan 29 12:41:25.060605 kernel: PCI: Using configuration type 1 for base access Jan 29 12:41:25.060614 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 29 12:41:25.060623 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:41:25.060632 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:41:25.060640 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:41:25.060649 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:41:25.060658 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:41:25.060666 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:41:25.060675 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 12:41:25.060685 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 12:41:25.060694 kernel: ACPI: Interpreter enabled Jan 29 12:41:25.060703 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 12:41:25.060711 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 12:41:25.060720 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 12:41:25.060729 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 12:41:25.060738 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 29 12:41:25.060746 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 12:41:25.060897 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 12:41:25.061002 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 12:41:25.061094 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 12:41:25.061108 kernel: acpiphp: Slot [3] registered Jan 29 12:41:25.061117 kernel: acpiphp: Slot [4] registered Jan 29 12:41:25.061125 kernel: acpiphp: Slot [5] registered Jan 29 12:41:25.061134 kernel: acpiphp: Slot [6] registered Jan 29 12:41:25.061142 kernel: acpiphp: Slot [7] registered Jan 29 12:41:25.061154 kernel: acpiphp: Slot [8] registered Jan 29 12:41:25.061162 kernel: acpiphp: Slot [9] registered Jan 29 12:41:25.061171 kernel: acpiphp: Slot [10] registered Jan 29 12:41:25.061180 
kernel: acpiphp: Slot [11] registered Jan 29 12:41:25.061188 kernel: acpiphp: Slot [12] registered Jan 29 12:41:25.061197 kernel: acpiphp: Slot [13] registered Jan 29 12:41:25.061205 kernel: acpiphp: Slot [14] registered Jan 29 12:41:25.061214 kernel: acpiphp: Slot [15] registered Jan 29 12:41:25.061222 kernel: acpiphp: Slot [16] registered Jan 29 12:41:25.063271 kernel: acpiphp: Slot [17] registered Jan 29 12:41:25.063285 kernel: acpiphp: Slot [18] registered Jan 29 12:41:25.063295 kernel: acpiphp: Slot [19] registered Jan 29 12:41:25.063304 kernel: acpiphp: Slot [20] registered Jan 29 12:41:25.063312 kernel: acpiphp: Slot [21] registered Jan 29 12:41:25.063321 kernel: acpiphp: Slot [22] registered Jan 29 12:41:25.063330 kernel: acpiphp: Slot [23] registered Jan 29 12:41:25.063338 kernel: acpiphp: Slot [24] registered Jan 29 12:41:25.063347 kernel: acpiphp: Slot [25] registered Jan 29 12:41:25.063355 kernel: acpiphp: Slot [26] registered Jan 29 12:41:25.063369 kernel: acpiphp: Slot [27] registered Jan 29 12:41:25.063378 kernel: acpiphp: Slot [28] registered Jan 29 12:41:25.063386 kernel: acpiphp: Slot [29] registered Jan 29 12:41:25.063395 kernel: acpiphp: Slot [30] registered Jan 29 12:41:25.063403 kernel: acpiphp: Slot [31] registered Jan 29 12:41:25.063412 kernel: PCI host bridge to bus 0000:00 Jan 29 12:41:25.063535 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 12:41:25.063623 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 12:41:25.063730 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 12:41:25.063814 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 12:41:25.063897 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 29 12:41:25.063980 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 12:41:25.064087 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 12:41:25.064187 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 12:41:25.064342 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 29 12:41:25.064439 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 29 12:41:25.064532 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 12:41:25.064624 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 12:41:25.064719 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 12:41:25.064810 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 12:41:25.064910 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 12:41:25.065011 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 29 12:41:25.065102 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 29 12:41:25.065201 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 29 12:41:25.065317 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 29 12:41:25.065430 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 29 12:41:25.065551 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 29 12:41:25.065680 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 29 12:41:25.065787 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 12:41:25.065898 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 29 12:41:25.066002 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Jan 29 12:41:25.066123 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Jan 29 12:41:25.066283 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Jan 29 12:41:25.066415 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Jan 29 12:41:25.066521 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 29 12:41:25.066623 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 12:41:25.066715 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Jan 29 12:41:25.066806 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Jan 29 12:41:25.066905 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Jan 29 12:41:25.066999 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Jan 29 12:41:25.067090 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Jan 29 12:41:25.067188 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 12:41:25.067399 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Jan 29 12:41:25.067491 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Jan 29 12:41:25.067595 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Jan 29 12:41:25.067610 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 12:41:25.067620 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 12:41:25.067629 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 12:41:25.067638 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 12:41:25.067647 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 12:41:25.067660 kernel: iommu: Default domain type: Translated Jan 29 12:41:25.067669 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:41:25.067678 kernel: PCI: Using ACPI for IRQ routing Jan 29 12:41:25.067687 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 12:41:25.067696 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 12:41:25.067704 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jan 29 12:41:25.067795 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 29 12:41:25.067885 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 29 12:41:25.067981 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 12:41:25.067995 kernel: vgaarb: loaded Jan 29 12:41:25.068004 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 12:41:25.068012 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:41:25.068021 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:41:25.068030 kernel: pnp: PnP ACPI init Jan 29 12:41:25.068125 kernel: pnp 00:03: [dma 2] Jan 29 12:41:25.068140 kernel: pnp: PnP ACPI: found 5 devices Jan 29 12:41:25.068149 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:41:25.068161 kernel: NET: Registered PF_INET protocol family Jan 29 12:41:25.068170 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:41:25.068179 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 12:41:25.068188 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:41:25.068197 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 12:41:25.068206 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Jan 29 12:41:25.068215 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 12:41:25.068224 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:41:25.068252 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:41:25.068261 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:41:25.068270 kernel: NET: Registered PF_XDP protocol family Jan 29 12:41:25.068356 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 12:41:25.068437 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 12:41:25.068517 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 12:41:25.068597 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jan 29 12:41:25.068689 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jan 29 12:41:25.068800 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 29 12:41:25.068901 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 12:41:25.068915 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:41:25.068924 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 12:41:25.068933 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jan 29 12:41:25.068942 kernel: Initialise system trusted keyrings Jan 29 12:41:25.068951 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 12:41:25.068960 kernel: Key type asymmetric registered Jan 29 12:41:25.068969 kernel: Asymmetric key parser 'x509' registered Jan 29 12:41:25.068981 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:41:25.068990 kernel: io scheduler mq-deadline registered Jan 29 12:41:25.068999 kernel: io scheduler kyber registered Jan 29 12:41:25.069008 kernel: io scheduler bfq registered Jan 29 12:41:25.069017 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:41:25.069027 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 29 12:41:25.069036 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 12:41:25.069045 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 12:41:25.069054 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 12:41:25.069064 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:41:25.069073 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:41:25.069082 kernel: random: crng init done Jan 29 12:41:25.069091 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 12:41:25.069100 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 12:41:25.069108 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 12:41:25.069199 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 12:41:25.069214 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 12:41:25.069316 kernel: rtc_cmos 00:04: registered as rtc0 Jan 29 12:41:25.069448 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T12:41:24 UTC (1738154484) Jan 29 12:41:25.069536 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 29 12:41:25.069549 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 12:41:25.069558 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:41:25.069567 kernel: Segment Routing with IPv6 Jan 29 12:41:25.069576 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:41:25.069585 kernel: NET: Registered PF_PACKET 
protocol family Jan 29 12:41:25.069594 kernel: Key type dns_resolver registered Jan 29 12:41:25.069606 kernel: IPI shorthand broadcast: enabled Jan 29 12:41:25.069615 kernel: sched_clock: Marking stable (1043007601, 170561506)->(1254265875, -40696768) Jan 29 12:41:25.069624 kernel: registered taskstats version 1 Jan 29 12:41:25.069633 kernel: Loading compiled-in X.509 certificates Jan 29 12:41:25.069642 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 12:41:25.069651 kernel: Key type .fscrypt registered Jan 29 12:41:25.069659 kernel: Key type fscrypt-provisioning registered Jan 29 12:41:25.069668 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 12:41:25.069677 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:41:25.069688 kernel: ima: No architecture policies found Jan 29 12:41:25.069696 kernel: clk: Disabling unused clocks Jan 29 12:41:25.069705 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 12:41:25.069714 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:41:25.069723 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 12:41:25.069732 kernel: Run /init as init process Jan 29 12:41:25.069740 kernel: with arguments: Jan 29 12:41:25.069749 kernel: /init Jan 29 12:41:25.069757 kernel: with environment: Jan 29 12:41:25.069768 kernel: HOME=/ Jan 29 12:41:25.069776 kernel: TERM=linux Jan 29 12:41:25.069788 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:41:25.069805 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:41:25.069824 systemd[1]: Detected virtualization kvm. Jan 29 12:41:25.069840 systemd[1]: Detected architecture x86-64. Jan 29 12:41:25.069850 systemd[1]: Running in initrd. Jan 29 12:41:25.069862 systemd[1]: No hostname configured, using default hostname. Jan 29 12:41:25.069871 systemd[1]: Hostname set to <localhost>. Jan 29 12:41:25.069881 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:41:25.069891 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:41:25.069900 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:41:25.069910 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:41:25.069920 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:41:25.069939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:41:25.069951 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:41:25.069961 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:41:25.069973 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:41:25.069983 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:41:25.069995 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 29 12:41:25.070005 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:41:25.070015 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:41:25.070025 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:41:25.070034 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:41:25.070044 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:41:25.070054 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:41:25.070063 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:41:25.070073 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:41:25.070085 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:41:25.070095 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:41:25.070105 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:41:25.070115 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:41:25.070125 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:41:25.070134 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:41:25.070144 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:41:25.070154 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:41:25.070164 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:41:25.070176 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:41:25.070186 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:41:25.070217 systemd-journald[184]: Collecting audit messages is disabled. Jan 29 12:41:25.070266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:41:25.070281 systemd-journald[184]: Journal started Jan 29 12:41:25.070304 systemd-journald[184]: Runtime Journal (/run/log/journal/915d3bc888794423b76aa0cff75d46ac) is 8.0M, max 78.3M, 70.3M free. Jan 29 12:41:25.081250 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:41:25.083064 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:41:25.088558 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:41:25.089205 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:41:25.097366 systemd-modules-load[185]: Inserted module 'overlay' Jan 29 12:41:25.144015 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:41:25.144041 kernel: Bridge firewalling registered Jan 29 12:41:25.101492 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:41:25.125993 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 29 12:41:25.150358 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:41:25.151894 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:41:25.153346 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:41:25.154780 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 29 12:41:25.164380 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:41:25.165628 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:41:25.169778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:41:25.171274 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:41:25.183067 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:41:25.187437 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:41:25.194500 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:41:25.197514 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:41:25.206378 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 12:41:25.223184 dracut-cmdline[222]: dracut-dracut-053 Jan 29 12:41:25.224053 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:41:25.245419 systemd-resolved[219]: Positive Trust Anchors: Jan 29 12:41:25.246157 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:41:25.246693 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:41:25.252952 systemd-resolved[219]: Defaulting to hostname 'linux'. Jan 29 12:41:25.254298 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:41:25.255045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:41:25.306365 kernel: SCSI subsystem initialized Jan 29 12:41:25.317285 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:41:25.331280 kernel: iscsi: registered transport (tcp) Jan 29 12:41:25.355788 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:41:25.355885 kernel: QLogic iSCSI HBA Driver Jan 29 12:41:25.411971 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 12:41:25.418397 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:41:25.448917 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 12:41:25.449035 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:41:25.449071 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:41:25.500320 kernel: raid6: sse2x4 gen() 12748 MB/s Jan 29 12:41:25.518294 kernel: raid6: sse2x2 gen() 13776 MB/s Jan 29 12:41:25.536743 kernel: raid6: sse2x1 gen() 9323 MB/s Jan 29 12:41:25.536842 kernel: raid6: using algorithm sse2x2 gen() 13776 MB/s Jan 29 12:41:25.556050 kernel: raid6: .... xor() 8441 MB/s, rmw enabled Jan 29 12:41:25.556191 kernel: raid6: using ssse3x2 recovery algorithm Jan 29 12:41:25.581317 kernel: xor: measuring software checksum speed Jan 29 12:41:25.581456 kernel: prefetch64-sse : 17096 MB/sec Jan 29 12:41:25.585135 kernel: generic_sse : 13709 MB/sec Jan 29 12:41:25.585206 kernel: xor: using function: prefetch64-sse (17096 MB/sec) Jan 29 12:41:25.787291 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:41:25.806266 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:41:25.813534 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:41:25.859343 systemd-udevd[405]: Using default interface naming scheme 'v255'. Jan 29 12:41:25.871573 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:41:25.881736 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:41:25.917953 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Jan 29 12:41:25.976915 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:41:25.985585 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:41:26.063008 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:41:26.071677 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 12:41:26.097691 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:41:26.100814 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:41:26.102422 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:41:26.102938 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:41:26.110375 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:41:26.142879 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:41:26.172020 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 29 12:41:26.227395 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 29 12:41:26.227545 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 12:41:26.227561 kernel: GPT:17805311 != 20971519 Jan 29 12:41:26.227574 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 12:41:26.227586 kernel: GPT:17805311 != 20971519 Jan 29 12:41:26.227598 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 12:41:26.227610 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:41:26.227622 kernel: libata version 3.00 loaded. 
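The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", 17805311 != 20971519) typically mean the disk is larger than the image that was written to it; disk-uuid.service rewrites the secondary header a few lines further down. A minimal Python sketch of the same consistency check, assuming the /dev/vda virtio disk and the 512-byte logical blocks reported in the log (run as root inside the guest):

import os, struct

DISK = "/dev/vda"   # assumption: the virtio disk enumerated in the log
SECTOR = 512        # the log reports 512-byte logical blocks

with open(DISK, "rb") as f:
    last_lba = f.seek(0, os.SEEK_END) // SECTOR - 1   # LBA of the disk's final sector
    f.seek(SECTOR)                                    # primary GPT header lives in LBA 1
    hdr = f.read(92)

if hdr[:8] != b"EFI PART":
    raise SystemExit("no GPT signature at LBA 1")

alt_lba = struct.unpack_from("<Q", hdr, 32)[0]        # AlternateLBA field of the header
print(f"backup GPT header recorded at LBA {alt_lba}, disk actually ends at LBA {last_lba}")
# 17805311 vs 20971519 is exactly the mismatch the kernel reports above.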
Jan 29 12:41:26.227638 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 29 12:41:26.227784 kernel: scsi host0: ata_piix Jan 29 12:41:26.227916 kernel: scsi host1: ata_piix Jan 29 12:41:26.228039 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 29 12:41:26.228053 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 29 12:41:26.210202 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:41:26.210363 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:41:26.211037 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:41:26.211557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:41:26.211677 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:41:26.212299 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:41:26.223261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:41:26.283372 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:41:26.289370 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:41:26.304461 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:41:26.421320 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (481) Jan 29 12:41:26.431996 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451) Jan 29 12:41:26.463554 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 12:41:26.469346 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 12:41:26.475167 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:41:26.480098 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 12:41:26.480731 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 12:41:26.491436 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:41:26.505393 disk-uuid[516]: Primary Header is updated. Jan 29 12:41:26.505393 disk-uuid[516]: Secondary Entries is updated. Jan 29 12:41:26.505393 disk-uuid[516]: Secondary Header is updated. Jan 29 12:41:26.515248 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:41:26.521951 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:41:27.538632 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:41:27.538712 disk-uuid[517]: The operation has completed successfully. Jan 29 12:41:27.596492 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:41:27.596776 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:41:27.639387 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:41:27.658024 sh[530]: Success Jan 29 12:41:27.684288 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 29 12:41:27.750983 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:41:27.760803 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 29 12:41:27.763965 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 12:41:27.787944 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 12:41:27.788021 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:41:27.792903 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:41:27.797662 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:41:27.801379 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:41:27.823223 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:41:27.825723 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:41:27.835549 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:41:27.847752 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:41:27.886042 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:41:27.886135 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:41:27.886165 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:41:27.898341 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:41:27.919864 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 12:41:27.925856 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:41:27.942484 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:41:27.949458 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:41:28.006492 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:41:28.014495 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:41:28.047017 systemd-networkd[712]: lo: Link UP Jan 29 12:41:28.047813 systemd-networkd[712]: lo: Gained carrier Jan 29 12:41:28.049308 systemd-networkd[712]: Enumeration completed Jan 29 12:41:28.049437 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:41:28.050033 systemd-networkd[712]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:41:28.050037 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:41:28.052463 systemd-networkd[712]: eth0: Link UP Jan 29 12:41:28.052466 systemd-networkd[712]: eth0: Gained carrier Jan 29 12:41:28.052478 systemd-networkd[712]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:41:28.055404 systemd[1]: Reached target network.target - Network. Jan 29 12:41:28.069740 systemd-networkd[712]: eth0: DHCPv4 address 172.24.4.118/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 12:41:28.104948 ignition[661]: Ignition 2.19.0 Jan 29 12:41:28.104966 ignition[661]: Stage: fetch-offline Jan 29 12:41:28.105015 ignition[661]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:41:28.105027 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:41:28.107393 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 12:41:28.105163 ignition[661]: parsed url from cmdline: "" Jan 29 12:41:28.105168 ignition[661]: no config URL provided Jan 29 12:41:28.105174 ignition[661]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:41:28.105185 ignition[661]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:41:28.105193 ignition[661]: failed to fetch config: resource requires networking Jan 29 12:41:28.106063 ignition[661]: Ignition finished successfully Jan 29 12:41:28.118514 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 12:41:28.132997 ignition[721]: Ignition 2.19.0 Jan 29 12:41:28.133011 ignition[721]: Stage: fetch Jan 29 12:41:28.133219 ignition[721]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:41:28.133258 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:41:28.133370 ignition[721]: parsed url from cmdline: "" Jan 29 12:41:28.133374 ignition[721]: no config URL provided Jan 29 12:41:28.133381 ignition[721]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:41:28.133441 ignition[721]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:41:28.133572 ignition[721]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 29 12:41:28.133618 ignition[721]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 29 12:41:28.133656 ignition[721]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 29 12:41:28.362678 systemd-resolved[219]: Detected conflict on linux IN A 172.24.4.118 Jan 29 12:41:28.362691 systemd-resolved[219]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Jan 29 12:41:28.384472 ignition[721]: GET result: OK Jan 29 12:41:28.384708 ignition[721]: parsing config with SHA512: 0a13a20b7f360ab751bb59ce0f32adc0d7001b11790b1590cc38d297a87e955223c8609cfc97e557638ec64329217244d80ce855fb970dbb7e1ad8dac842793e Jan 29 12:41:28.397421 unknown[721]: fetched base config from "system" Jan 29 12:41:28.397462 unknown[721]: fetched base config from "system" Jan 29 12:41:28.399077 ignition[721]: fetch: fetch complete Jan 29 12:41:28.397484 unknown[721]: fetched user config from "openstack" Jan 29 12:41:28.399095 ignition[721]: fetch: fetch passed Jan 29 12:41:28.403709 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 12:41:28.399214 ignition[721]: Ignition finished successfully Jan 29 12:41:28.414612 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 12:41:28.462977 ignition[727]: Ignition 2.19.0 Jan 29 12:41:28.463004 ignition[727]: Stage: kargs Jan 29 12:41:28.463655 ignition[727]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:41:28.463692 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:41:28.467532 ignition[727]: kargs: kargs passed Jan 29 12:41:28.470688 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:41:28.467666 ignition[727]: Ignition finished successfully Jan 29 12:41:28.482695 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 12:41:28.512389 ignition[733]: Ignition 2.19.0 Jan 29 12:41:28.512405 ignition[733]: Stage: disks Jan 29 12:41:28.512649 ignition[733]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:41:28.512665 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:41:28.514155 ignition[733]: disks: disks passed Jan 29 12:41:28.515453 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
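The fetch stage above shows Ignition falling back to the OpenStack metadata service after finding no config-drive: it GETs http://169.254.169.254/openstack/latest/user_data, then hashes and parses the result. A minimal Python sketch of the same request, using the URL verbatim from the log (a link-local address, so it only answers from inside the instance):

import urllib.request

URL = "http://169.254.169.254/openstack/latest/user_data"  # endpoint taken from the log above

with urllib.request.urlopen(URL, timeout=5) as resp:
    user_data = resp.read()

print(f"fetched {len(user_data)} bytes of user_data")
# Ignition logs the SHA512 of this blob ("parsing config with SHA512: ...") and then
# merges it with the base config baked into the image ("fetched base config from system").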
Jan 29 12:41:28.514215 ignition[733]: Ignition finished successfully Jan 29 12:41:28.516976 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:41:28.517736 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:41:28.519030 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:41:28.520180 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:41:28.521595 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:41:28.528412 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:41:28.553491 systemd-fsck[741]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 12:41:28.564595 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 12:41:28.572488 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:41:28.717589 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 12:41:28.719213 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:41:28.721545 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 12:41:28.753440 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:41:28.771417 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:41:28.773179 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 12:41:28.776556 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 29 12:41:28.781066 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:41:28.781131 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:41:28.795435 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 12:41:28.805574 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 12:41:28.828303 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (749) Jan 29 12:41:28.859323 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:41:28.859430 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:41:28.863302 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:41:28.911801 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:41:28.923052 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:41:29.242711 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:41:29.257665 initrd-setup-root[785]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:41:29.270090 initrd-setup-root[792]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:41:29.281379 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:41:29.447391 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:41:29.464472 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:41:29.469824 systemd-networkd[712]: eth0: Gained IPv6LL Jan 29 12:41:29.470465 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 29 12:41:29.477829 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 12:41:29.483294 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:41:29.512802 ignition[866]: INFO : Ignition 2.19.0 Jan 29 12:41:29.515000 ignition[866]: INFO : Stage: mount Jan 29 12:41:29.515000 ignition[866]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:41:29.515000 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:41:29.518802 ignition[866]: INFO : mount: mount passed Jan 29 12:41:29.519499 ignition[866]: INFO : Ignition finished successfully Jan 29 12:41:29.522047 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:41:29.547373 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 12:41:36.372847 coreos-metadata[751]: Jan 29 12:41:36.372 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:41:36.414651 coreos-metadata[751]: Jan 29 12:41:36.414 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 12:41:36.430453 coreos-metadata[751]: Jan 29 12:41:36.430 INFO Fetch successful Jan 29 12:41:36.432060 coreos-metadata[751]: Jan 29 12:41:36.430 INFO wrote hostname ci-4081-3-0-e-97e17aa81b.novalocal to /sysroot/etc/hostname Jan 29 12:41:36.436565 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 29 12:41:36.436816 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 29 12:41:36.448446 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:41:36.481577 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:41:36.501291 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (883) Jan 29 12:41:36.510297 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:41:36.510395 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:41:36.514105 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:41:36.526332 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:41:36.532054 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 12:41:36.575169 ignition[901]: INFO : Ignition 2.19.0 Jan 29 12:41:36.575169 ignition[901]: INFO : Stage: files Jan 29 12:41:36.578149 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:41:36.578149 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:41:36.578149 ignition[901]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:41:36.584969 ignition[901]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:41:36.584969 ignition[901]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:41:36.591393 ignition[901]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:41:36.594200 ignition[901]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:41:36.596118 ignition[901]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:41:36.594440 unknown[901]: wrote ssh authorized keys file for user: core Jan 29 12:41:36.599692 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:41:36.599692 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 12:41:37.208396 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 12:41:39.522125 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:41:39.522125 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 12:41:39.527666 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 12:41:40.109928 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 12:41:41.133228 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 12:41:41.133228 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:41:41.138507 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:41:41.138507 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:41:41.138507 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:41:41.138507 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:41:41.138507 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:41:41.138507 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:41:41.138507 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:41:41.138507 
ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:41:41.138507 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:41:41.138507 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:41:41.138507 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:41:41.138507 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:41:41.138507 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 12:41:41.477961 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 12:41:43.356050 ignition[901]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:41:43.356050 ignition[901]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 12:41:43.360324 ignition[901]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:41:43.360324 ignition[901]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:41:43.360324 ignition[901]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 12:41:43.360324 ignition[901]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 29 12:41:43.360324 ignition[901]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 12:41:43.360324 ignition[901]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:41:43.360324 ignition[901]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:41:43.360324 ignition[901]: INFO : files: files passed Jan 29 12:41:43.360324 ignition[901]: INFO : Ignition finished successfully Jan 29 12:41:43.362005 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:41:43.373507 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:41:43.378542 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:41:43.384376 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:41:43.389554 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
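The files stage above is driven by an Ignition config supplied at boot; the config itself is not part of this log. As a rough, hedged sketch only (field names recalled from the Ignition v3 spec, not confirmed by this log), a Python snippet that emits the kind of config describing the same operations: a remote file, a symlink into /etc/extensions, and an enabled unit:

# Hedged sketch of an Ignition v3 config resembling the operations logged
# above. The real config is not in this log; schema field names are
# recalled from the Ignition spec and should be verified against it.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            },
        ],
    },
    # Unit body omitted in this sketch; the log shows prepare-helm.service
    # being written and preset to enabled.
    "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
}

print(json.dumps(config, indent=2))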
Jan 29 12:41:43.408329 initrd-setup-root-after-ignition[934]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:41:43.412105 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:41:43.412105 initrd-setup-root-after-ignition[930]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:41:43.412194 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:41:43.414275 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:41:43.429473 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:41:43.486974 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:41:43.487177 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:41:43.490728 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:41:43.492965 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:41:43.495826 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:41:43.507540 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:41:43.536026 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:41:43.554497 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:41:43.577196 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:41:43.578987 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:41:43.582395 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:41:43.585594 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:41:43.585890 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:41:43.589291 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:41:43.591300 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:41:43.594392 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:41:43.597153 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:41:43.599953 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:41:43.603085 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:41:43.606079 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:41:43.609312 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:41:43.612435 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:41:43.615558 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:41:43.618443 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:41:43.618716 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:41:43.622074 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:41:43.624084 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:41:43.626846 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:41:43.628435 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 29 12:41:43.630079 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:41:43.630403 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:41:43.634804 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:41:43.635111 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:41:43.637143 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:41:43.637511 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:41:43.648763 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:41:43.650320 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:41:43.650748 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:41:43.661484 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:41:43.662808 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:41:43.663110 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:41:43.670895 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:41:43.672992 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:41:43.681487 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:41:43.682174 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:41:43.691930 ignition[954]: INFO : Ignition 2.19.0 Jan 29 12:41:43.691930 ignition[954]: INFO : Stage: umount Jan 29 12:41:43.691930 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:41:43.691930 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 12:41:43.691930 ignition[954]: INFO : umount: umount passed Jan 29 12:41:43.691930 ignition[954]: INFO : Ignition finished successfully Jan 29 12:41:43.694802 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:41:43.694897 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:41:43.696141 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:41:43.696255 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:41:43.696813 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:41:43.696856 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:41:43.698357 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 12:41:43.698398 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 12:41:43.699133 systemd[1]: Stopped target network.target - Network. Jan 29 12:41:43.699601 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:41:43.699646 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:41:43.700197 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:41:43.703388 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:41:43.708646 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:41:43.709442 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:41:43.709871 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:41:43.710369 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:41:43.710405 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 29 12:41:43.711647 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:41:43.711679 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:41:43.712770 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:41:43.712840 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:41:43.714117 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:41:43.714172 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:41:43.715352 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:41:43.716624 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:41:43.719005 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:41:43.719825 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:41:43.719956 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:41:43.720371 systemd-networkd[712]: eth0: DHCPv6 lease lost Jan 29 12:41:43.722007 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:41:43.722145 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:41:43.723722 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:41:43.723776 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:41:43.725023 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:41:43.725075 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:41:43.732411 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:41:43.736355 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:41:43.736422 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:41:43.737748 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:41:43.739077 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:41:43.739209 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:41:43.747189 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:41:43.747273 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:41:43.747918 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:41:43.747964 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:41:43.748683 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:41:43.748728 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:41:43.750595 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:41:43.750724 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:41:43.751876 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:41:43.751958 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:41:43.753569 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:41:43.753631 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:41:43.754724 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:41:43.754757 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 12:41:43.755891 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:41:43.755936 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:41:43.757446 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:41:43.757488 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:41:43.758440 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:41:43.758483 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:41:43.766460 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:41:43.767016 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:41:43.767073 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:41:43.770638 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:41:43.770693 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:41:43.772222 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:41:43.772336 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:41:43.773965 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:41:43.783378 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:41:43.791009 systemd[1]: Switching root. Jan 29 12:41:43.819632 systemd-journald[184]: Journal stopped Jan 29 12:41:46.218949 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 29 12:41:46.219016 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:41:46.219035 kernel: SELinux: policy capability open_perms=1 Jan 29 12:41:46.219047 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:41:46.219059 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:41:46.219074 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:41:46.219086 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:41:46.219098 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:41:46.219109 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:41:46.219125 kernel: audit: type=1403 audit(1738154505.204:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:41:46.219138 systemd[1]: Successfully loaded SELinux policy in 86.058ms. Jan 29 12:41:46.219163 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.379ms. Jan 29 12:41:46.219177 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:41:46.219193 systemd[1]: Detected virtualization kvm. Jan 29 12:41:46.219207 systemd[1]: Detected architecture x86-64. Jan 29 12:41:46.219222 systemd[1]: Detected first boot. Jan 29 12:41:46.219251 systemd[1]: Hostname set to <ci-4081-3-0-e-97e17aa81b.novalocal>. Jan 29 12:41:46.219265 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:41:46.219278 zram_generator::config[996]: No configuration found. Jan 29 12:41:46.219296 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:41:46.219312 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Jan 29 12:41:46.219325 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 12:41:46.219338 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 12:41:46.219352 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:41:46.219364 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:41:46.219377 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:41:46.219390 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:41:46.219403 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:41:46.219415 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:41:46.219430 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:41:46.219443 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:41:46.219455 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:41:46.219469 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:41:46.219482 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:41:46.219495 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:41:46.219510 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:41:46.219522 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:41:46.219533 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 12:41:46.219547 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:41:46.219559 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 12:41:46.219575 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 12:41:46.219587 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 12:41:46.219599 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:41:46.219613 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:41:46.219625 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:41:46.219637 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:41:46.219649 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:41:46.219661 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:41:46.219673 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:41:46.219685 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:41:46.219697 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:41:46.219709 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:41:46.219720 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:41:46.219736 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:41:46.219748 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jan 29 12:41:46.219759 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:41:46.219772 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:41:46.219784 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:41:46.219796 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:41:46.219808 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:41:46.219820 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:41:46.219834 systemd[1]: Reached target machines.target - Containers. Jan 29 12:41:46.219845 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:41:46.219857 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:41:46.219869 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:41:46.219881 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:41:46.219893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:41:46.219905 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:41:46.219917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:41:46.219929 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:41:46.219943 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:41:46.219955 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:41:46.219967 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 12:41:46.219979 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 12:41:46.219992 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 12:41:46.220004 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 12:41:46.220015 kernel: fuse: init (API version 7.39) Jan 29 12:41:46.220026 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:41:46.220038 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:41:46.220051 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:41:46.220063 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:41:46.220075 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:41:46.220088 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 12:41:46.220099 systemd[1]: Stopped verity-setup.service. Jan 29 12:41:46.220112 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:41:46.220124 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:41:46.220136 kernel: loop: module loaded Jan 29 12:41:46.220149 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:41:46.220161 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 29 12:41:46.220172 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:41:46.220184 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:41:46.220196 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:41:46.220208 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:41:46.220222 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:41:46.222074 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:41:46.222094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:41:46.222107 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:41:46.222140 systemd-journald[1089]: Collecting audit messages is disabled. Jan 29 12:41:46.222165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:41:46.222178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:41:46.222196 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:41:46.222210 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:41:46.222223 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:41:46.222290 systemd-journald[1089]: Journal started Jan 29 12:41:46.222321 systemd-journald[1089]: Runtime Journal (/run/log/journal/915d3bc888794423b76aa0cff75d46ac) is 8.0M, max 78.3M, 70.3M free. Jan 29 12:41:45.862200 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:41:45.883370 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 12:41:45.883721 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 12:41:46.226264 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:41:46.227568 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:41:46.227783 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:41:46.228640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:41:46.229552 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:41:46.230414 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:41:46.231306 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:41:46.240219 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:41:46.246613 kernel: ACPI: bus type drm_connector registered Jan 29 12:41:46.247376 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:41:46.252359 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:41:46.253025 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:41:46.253065 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:41:46.254874 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:41:46.258408 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:41:46.269513 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 29 12:41:46.270261 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:41:46.272341 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:41:46.276470 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:41:46.277320 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:41:46.284295 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:41:46.284937 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:41:46.290173 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:41:46.295001 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:41:46.301455 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:41:46.311291 systemd-journald[1089]: Time spent on flushing to /var/log/journal/915d3bc888794423b76aa0cff75d46ac is 29.868ms for 946 entries. Jan 29 12:41:46.311291 systemd-journald[1089]: System Journal (/var/log/journal/915d3bc888794423b76aa0cff75d46ac) is 8.0M, max 584.8M, 576.8M free. Jan 29 12:41:46.364398 systemd-journald[1089]: Received client request to flush runtime journal. Jan 29 12:41:46.309926 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:41:46.314423 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:41:46.378852 kernel: loop0: detected capacity change from 0 to 210664 Jan 29 12:41:46.314581 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:41:46.315502 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:41:46.316103 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:41:46.316864 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:41:46.335985 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:41:46.336835 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:41:46.346529 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:41:46.371607 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:41:46.380215 udevadm[1132]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 12:41:46.408290 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:41:46.415490 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:41:46.418943 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:41:46.444277 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:41:46.442671 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:41:46.450181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:41:46.477884 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. Jan 29 12:41:46.478227 systemd-tmpfiles[1149]: ACLs are not supported, ignoring. 
Jan 29 12:41:46.483265 kernel: loop1: detected capacity change from 0 to 142488 Jan 29 12:41:46.486756 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:41:46.565581 kernel: loop2: detected capacity change from 0 to 140768 Jan 29 12:41:46.669763 kernel: loop3: detected capacity change from 0 to 8 Jan 29 12:41:46.720639 kernel: loop4: detected capacity change from 0 to 210664 Jan 29 12:41:46.828309 kernel: loop5: detected capacity change from 0 to 142488 Jan 29 12:41:46.879624 kernel: loop6: detected capacity change from 0 to 140768 Jan 29 12:41:46.953191 kernel: loop7: detected capacity change from 0 to 8 Jan 29 12:41:46.951681 (sd-merge)[1156]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 29 12:41:46.952109 (sd-merge)[1156]: Merged extensions into '/usr'. Jan 29 12:41:46.957591 systemd[1]: Reloading requested from client PID 1130 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:41:46.957691 systemd[1]: Reloading... Jan 29 12:41:47.074273 zram_generator::config[1181]: No configuration found. Jan 29 12:41:47.282464 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:41:47.344868 systemd[1]: Reloading finished in 386 ms. Jan 29 12:41:47.375896 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:41:47.376920 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:41:47.385437 systemd[1]: Starting ensure-sysext.service... Jan 29 12:41:47.388383 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:41:47.391445 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:41:47.409364 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:41:47.409381 systemd[1]: Reloading... Jan 29 12:41:47.440226 systemd-udevd[1240]: Using default interface naming scheme 'v255'. Jan 29 12:41:47.444165 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:41:47.444647 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:41:47.445756 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:41:47.446110 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jan 29 12:41:47.446188 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jan 29 12:41:47.480255 zram_generator::config[1268]: No configuration found. Jan 29 12:41:47.486407 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:41:47.486422 systemd-tmpfiles[1239]: Skipping /boot Jan 29 12:41:47.497293 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:41:47.497304 systemd-tmpfiles[1239]: Skipping /boot Jan 29 12:41:47.648030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:41:47.707735 systemd[1]: Reloading finished in 298 ms. 
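The (sd-merge) entries above show systemd-sysext merging the containerd-flatcar, docker-flatcar, kubernetes and oem-openstack images into /usr. A hedged Python sketch that only lists candidate extension images in the directories sysext is documented to scan (of these, only /etc/extensions is confirmed by this log, via the kubernetes.raw link written earlier):

# Hedged sketch: list *.raw extension images in the directories
# systemd-sysext scans (directory list from the sysext docs as recalled;
# only /etc/extensions is confirmed by this log).
from pathlib import Path

SCAN_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SCAN_DIRS:
    path = Path(d)
    if not path.is_dir():
        continue
    for image in sorted(path.glob("*.raw")):
        # To merge, each image must ship an extension-release file whose
        # ID / SYSEXT_LEVEL fields match the host OS release.
        print(image)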
Jan 29 12:41:47.736921 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:41:47.746411 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:41:47.771226 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:41:47.774451 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:41:47.778421 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:41:47.788356 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:41:47.795823 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:41:47.796820 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:41:47.806875 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:41:47.812776 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:41:47.819543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:41:47.820220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:41:47.820383 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:41:47.821525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:41:47.821729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:41:47.822950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:41:47.823179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:41:47.830101 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:41:47.831440 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:41:47.831669 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:41:47.839176 systemd[1]: Finished ensure-sysext.service. Jan 29 12:41:47.846977 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:41:47.847395 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:41:47.856620 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:41:47.867624 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:41:47.873434 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:41:47.876489 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:41:47.877156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:41:47.881888 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 12:41:47.891163 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:41:47.892294 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 29 12:41:47.892518 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:41:47.919927 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:41:47.922208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:41:47.922373 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:41:47.930923 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:41:47.940598 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:41:47.940753 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:41:47.954408 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:41:47.954577 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:41:47.955937 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:41:47.958441 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:41:47.963886 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:41:47.964063 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:41:47.964896 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:41:47.976222 augenrules[1384]: No rules Jan 29 12:41:47.979587 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:41:47.983390 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:41:47.992460 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:41:48.009340 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:41:48.022654 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:41:48.023491 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:41:48.027328 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:41:48.047627 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 12:41:48.095261 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1366) Jan 29 12:41:48.162285 systemd-networkd[1360]: lo: Link UP Jan 29 12:41:48.162585 systemd-networkd[1360]: lo: Gained carrier Jan 29 12:41:48.163949 systemd-networkd[1360]: Enumeration completed Jan 29 12:41:48.165397 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:41:48.165855 systemd-networkd[1360]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:41:48.165863 systemd-networkd[1360]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:41:48.166551 systemd-networkd[1360]: eth0: Link UP Jan 29 12:41:48.166555 systemd-networkd[1360]: eth0: Gained carrier Jan 29 12:41:48.166570 systemd-networkd[1360]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 12:41:48.171380 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:41:48.181297 systemd-networkd[1360]: eth0: DHCPv4 address 172.24.4.118/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 29 12:41:48.186014 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:41:48.187910 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 12:41:48.189508 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:41:48.195433 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:41:48.211270 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 12:41:48.213771 systemd-resolved[1330]: Positive Trust Anchors: Jan 29 12:41:48.213789 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:41:48.213833 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:41:48.220978 systemd-resolved[1330]: Using system hostname 'ci-4081-3-0-e-97e17aa81b.novalocal'. Jan 29 12:41:48.221258 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 29 12:41:48.222810 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:41:48.223459 systemd[1]: Reached target network.target - Network. Jan 29 12:41:48.223925 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:41:48.227341 kernel: ACPI: button: Power Button [PWRF] Jan 29 12:41:48.229743 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:41:48.231975 systemd-timesyncd[1353]: Contacted time server 82.64.230.205:123 (0.flatcar.pool.ntp.org). Jan 29 12:41:48.232031 systemd-timesyncd[1353]: Initial clock synchronization to Wed 2025-01-29 12:41:48.234785 UTC. Jan 29 12:41:48.255258 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 12:41:48.277263 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 12:41:48.291481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:41:48.304326 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 29 12:41:48.304398 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 29 12:41:48.310287 kernel: Console: switching to colour dummy device 80x25 Jan 29 12:41:48.310354 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 12:41:48.310369 kernel: [drm] features: -context_init Jan 29 12:41:48.315665 kernel: [drm] number of scanouts: 1 Jan 29 12:41:48.315721 kernel: [drm] number of cap sets: 0 Jan 29 12:41:48.315929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:41:48.316198 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
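systemd-networkd above reports a DHCPv4 lease of 172.24.4.118/24 with gateway 172.24.4.1. A small sanity check of those values with the Python standard library, numbers taken straight from the log:

# Sanity check of the DHCPv4 lease reported above (values from the log).
import ipaddress

iface = ipaddress.ip_interface("172.24.4.118/24")
gateway = ipaddress.ip_address("172.24.4.1")

assert gateway in iface.network            # gateway is on-link for the /24
print(iface.network)                       # 172.24.4.0/24
print(iface.network.broadcast_address)     # 172.24.4.255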
Jan 29 12:41:48.318100 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 29 12:41:48.326328 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 12:41:48.326426 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 12:41:48.329194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:41:48.332282 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 12:41:48.339065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:41:48.339388 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:41:48.349507 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:41:48.351437 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:41:48.360015 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:41:48.380216 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:41:48.405901 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:41:48.406416 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:41:48.413476 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:41:48.431302 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:41:48.455961 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:41:48.458952 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:41:48.461640 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:41:48.461896 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:41:48.462378 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:41:48.462694 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:41:48.462840 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:41:48.462950 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:41:48.462997 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:41:48.463098 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:41:48.464575 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:41:48.468451 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:41:48.475867 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:41:48.478535 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:41:48.478812 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:41:48.479766 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:41:48.482406 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:41:48.484569 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:41:48.484630 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 29 12:41:48.490382 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:41:48.495453 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 12:41:48.502407 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:41:48.511409 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:41:48.521468 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:41:48.522136 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:41:48.524838 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:41:48.532746 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 12:41:48.542227 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:41:48.549533 jq[1440]: false Jan 29 12:41:48.549256 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:41:48.565115 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:41:48.566124 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:41:48.566650 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:41:48.568205 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:41:48.580361 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:41:48.585660 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:41:48.585838 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:41:48.591165 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:41:48.591383 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:41:48.591903 dbus-daemon[1439]: [system] SELinux support is enabled Jan 29 12:41:48.598359 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:41:48.601038 extend-filesystems[1443]: Found loop4 Jan 29 12:41:48.601038 extend-filesystems[1443]: Found loop5 Jan 29 12:41:48.601038 extend-filesystems[1443]: Found loop6 Jan 29 12:41:48.601038 extend-filesystems[1443]: Found loop7 Jan 29 12:41:48.601038 extend-filesystems[1443]: Found vda Jan 29 12:41:48.601038 extend-filesystems[1443]: Found vda1 Jan 29 12:41:48.601038 extend-filesystems[1443]: Found vda2 Jan 29 12:41:48.601038 extend-filesystems[1443]: Found vda3 Jan 29 12:41:48.601038 extend-filesystems[1443]: Found usr Jan 29 12:41:48.601038 extend-filesystems[1443]: Found vda4 Jan 29 12:41:48.627311 extend-filesystems[1443]: Found vda6 Jan 29 12:41:48.627311 extend-filesystems[1443]: Found vda7 Jan 29 12:41:48.627311 extend-filesystems[1443]: Found vda9 Jan 29 12:41:48.627311 extend-filesystems[1443]: Checking size of /dev/vda9 Jan 29 12:41:48.631057 jq[1451]: true Jan 29 12:41:48.606289 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 29 12:41:48.606342 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:41:48.620425 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:41:48.620448 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:41:48.650283 extend-filesystems[1443]: Resized partition /dev/vda9 Jan 29 12:41:48.656848 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:41:48.657802 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:41:48.671600 update_engine[1450]: I20250129 12:41:48.661674 1450 main.cc:92] Flatcar Update Engine starting Jan 29 12:41:48.667494 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:41:48.668387 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:41:48.680260 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 29 12:41:48.682507 jq[1464]: true Jan 29 12:41:48.691257 update_engine[1450]: I20250129 12:41:48.690725 1450 update_check_scheduler.cc:74] Next update check in 4m14s Jan 29 12:41:48.691524 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:41:48.706096 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:41:48.716416 tar[1454]: linux-amd64/helm Jan 29 12:41:48.720283 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 29 12:41:48.791804 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1366) Jan 29 12:41:48.797779 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 12:41:48.797779 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:41:48.797779 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 29 12:41:48.815519 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Jan 29 12:41:48.804540 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:41:48.804974 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:41:48.834455 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:41:48.830302 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:41:48.848425 systemd[1]: Starting sshkeys.service... Jan 29 12:41:48.854976 systemd-logind[1448]: New seat seat0. Jan 29 12:41:48.858514 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 12:41:48.858539 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:41:48.858709 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:41:48.872545 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 12:41:48.884875 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
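extend-filesystems above grows /dev/vda9 online from 1617920 to 2014203 blocks of 4 KiB. Converting those figures (taken from the log) to bytes shows the space gained:

# Arithmetic on the online resize reported above (4 KiB blocks, figures
# from the log).
BLOCK = 4096
old_blocks, new_blocks = 1617920, 2014203

old_gib = old_blocks * BLOCK / 2**30   # ~6.17 GiB
new_gib = new_blocks * BLOCK / 2**30   # ~7.68 GiB
print(f"grew by {(new_blocks - old_blocks) * BLOCK / 2**30:.2f} GiB "
      f"({old_gib:.2f} GiB -> {new_gib:.2f} GiB)")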
Jan 29 12:41:49.054317 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:41:49.180245 containerd[1470]: time="2025-01-29T12:41:49.176987933Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:41:49.226025 containerd[1470]: time="2025-01-29T12:41:49.225964645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:41:49.234722 containerd[1470]: time="2025-01-29T12:41:49.234676239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:41:49.234765 containerd[1470]: time="2025-01-29T12:41:49.234720850Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:41:49.234765 containerd[1470]: time="2025-01-29T12:41:49.234741643Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:41:49.234929 containerd[1470]: time="2025-01-29T12:41:49.234904528Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:41:49.234972 containerd[1470]: time="2025-01-29T12:41:49.234935291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:41:49.235033 containerd[1470]: time="2025-01-29T12:41:49.235007269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:41:49.235062 containerd[1470]: time="2025-01-29T12:41:49.235030998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:41:49.235220 containerd[1470]: time="2025-01-29T12:41:49.235193411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:41:49.235270 containerd[1470]: time="2025-01-29T12:41:49.235218664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:41:49.235270 containerd[1470]: time="2025-01-29T12:41:49.235250800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:41:49.235270 containerd[1470]: time="2025-01-29T12:41:49.235264117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:41:49.235424 containerd[1470]: time="2025-01-29T12:41:49.235345104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:41:49.235649 containerd[1470]: time="2025-01-29T12:41:49.235624849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:41:49.235756 containerd[1470]: time="2025-01-29T12:41:49.235731498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:41:49.235784 containerd[1470]: time="2025-01-29T12:41:49.235753223Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:41:49.235863 containerd[1470]: time="2025-01-29T12:41:49.235841164Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 12:41:49.235924 containerd[1470]: time="2025-01-29T12:41:49.235903071Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:41:49.243738 containerd[1470]: time="2025-01-29T12:41:49.243703411Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:41:49.243815 containerd[1470]: time="2025-01-29T12:41:49.243773355Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:41:49.243815 containerd[1470]: time="2025-01-29T12:41:49.243794047Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:41:49.244253 containerd[1470]: time="2025-01-29T12:41:49.243852277Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:41:49.244253 containerd[1470]: time="2025-01-29T12:41:49.243881858Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:41:49.244253 containerd[1470]: time="2025-01-29T12:41:49.244021766Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:41:49.244435 containerd[1470]: time="2025-01-29T12:41:49.244408431Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:41:49.244564 containerd[1470]: time="2025-01-29T12:41:49.244540442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:41:49.244591 containerd[1470]: time="2025-01-29T12:41:49.244565163Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:41:49.244591 containerd[1470]: time="2025-01-29T12:41:49.244580945Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:41:49.244632 containerd[1470]: time="2025-01-29T12:41:49.244597279Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:41:49.244632 containerd[1470]: time="2025-01-29T12:41:49.244612100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:41:49.244632 containerd[1470]: time="2025-01-29T12:41:49.244625998Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:41:49.244699 containerd[1470]: time="2025-01-29T12:41:49.244641861Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:41:49.244699 containerd[1470]: time="2025-01-29T12:41:49.244658214Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 29 12:41:49.244699 containerd[1470]: time="2025-01-29T12:41:49.244673115Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:41:49.244699 containerd[1470]: time="2025-01-29T12:41:49.244687614Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:41:49.244783 containerd[1470]: time="2025-01-29T12:41:49.244702045Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 12:41:49.244783 containerd[1470]: time="2025-01-29T12:41:49.244733910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.244783 containerd[1470]: time="2025-01-29T12:41:49.244749873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.244783 containerd[1470]: time="2025-01-29T12:41:49.244769964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.244876 containerd[1470]: time="2025-01-29T12:41:49.244787710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.244876 containerd[1470]: time="2025-01-29T12:41:49.244803302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.244876 containerd[1470]: time="2025-01-29T12:41:49.244823134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.244876 containerd[1470]: time="2025-01-29T12:41:49.244838636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.244876 containerd[1470]: time="2025-01-29T12:41:49.244853285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.244876 containerd[1470]: time="2025-01-29T12:41:49.244868627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.245002 containerd[1470]: time="2025-01-29T12:41:49.244885341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.245002 containerd[1470]: time="2025-01-29T12:41:49.244899130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.245002 containerd[1470]: time="2025-01-29T12:41:49.244913639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.245002 containerd[1470]: time="2025-01-29T12:41:49.244928640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.245002 containerd[1470]: time="2025-01-29T12:41:49.244946026Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:41:49.245002 containerd[1470]: time="2025-01-29T12:41:49.244968162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.245002 containerd[1470]: time="2025-01-29T12:41:49.244982341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 29 12:41:49.245002 containerd[1470]: time="2025-01-29T12:41:49.244994917Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:41:49.245197 containerd[1470]: time="2025-01-29T12:41:49.245052384Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:41:49.245197 containerd[1470]: time="2025-01-29T12:41:49.245074730Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:41:49.245197 containerd[1470]: time="2025-01-29T12:41:49.245088369Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:41:49.245197 containerd[1470]: time="2025-01-29T12:41:49.245165077Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:41:49.245197 containerd[1470]: time="2025-01-29T12:41:49.245182593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.245197 containerd[1470]: time="2025-01-29T12:41:49.245196933Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:41:49.245347 containerd[1470]: time="2025-01-29T12:41:49.245209518Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:41:49.245347 containerd[1470]: time="2025-01-29T12:41:49.245257376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 12:41:49.247249 containerd[1470]: time="2025-01-29T12:41:49.245578607Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:41:49.247249 containerd[1470]: time="2025-01-29T12:41:49.245658701Z" level=info msg="Connect containerd service" Jan 29 12:41:49.247249 containerd[1470]: time="2025-01-29T12:41:49.245690637Z" level=info msg="using legacy CRI server" Jan 29 12:41:49.247249 containerd[1470]: time="2025-01-29T12:41:49.245698233Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:41:49.247249 containerd[1470]: time="2025-01-29T12:41:49.245783669Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:41:49.247249 containerd[1470]: time="2025-01-29T12:41:49.246398653Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:41:49.247249 containerd[1470]: time="2025-01-29T12:41:49.246642083Z" level=info msg="Start subscribing containerd event" Jan 29 12:41:49.247249 containerd[1470]: time="2025-01-29T12:41:49.246692056Z" level=info msg="Start recovering state" Jan 29 12:41:49.247249 containerd[1470]: time="2025-01-29T12:41:49.246746408Z" level=info msg="Start event monitor" Jan 29 12:41:49.247249 containerd[1470]: time="2025-01-29T12:41:49.246761970Z" level=info msg="Start snapshots syncer" Jan 29 12:41:49.247249 containerd[1470]: time="2025-01-29T12:41:49.246771039Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:41:49.247249 containerd[1470]: time="2025-01-29T12:41:49.246779296Z" level=info msg="Start streaming server" Jan 29 12:41:49.250243 containerd[1470]: time="2025-01-29T12:41:49.249088193Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:41:49.250243 containerd[1470]: time="2025-01-29T12:41:49.249208972Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:41:49.249406 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:41:49.253738 containerd[1470]: time="2025-01-29T12:41:49.253684154Z" level=info msg="containerd successfully booted in 0.077640s" Jan 29 12:41:49.382602 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:41:49.406607 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:41:49.421576 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:41:49.428555 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:41:49.429039 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:41:49.439651 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 29 12:41:49.450753 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:41:49.460678 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:41:49.473729 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:41:49.476682 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:41:49.535941 tar[1454]: linux-amd64/LICENSE Jan 29 12:41:49.536093 tar[1454]: linux-amd64/README.md Jan 29 12:41:49.548787 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:41:50.013623 systemd-networkd[1360]: eth0: Gained IPv6LL Jan 29 12:41:50.018943 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:41:50.023490 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:41:50.034817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:41:50.047340 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:41:50.098742 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:41:50.257057 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:41:50.272064 systemd[1]: Started sshd@0-172.24.4.118:22-172.24.4.1:37148.service - OpenSSH per-connection server daemon (172.24.4.1:37148). Jan 29 12:41:51.606359 sshd[1547]: Accepted publickey for core from 172.24.4.1 port 37148 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:41:51.611928 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:41:51.640659 systemd-logind[1448]: New session 1 of user core. Jan 29 12:41:51.646091 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:41:51.659625 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:41:51.691754 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:41:51.701595 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:41:51.715056 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:41:51.831304 systemd[1552]: Queued start job for default target default.target. Jan 29 12:41:51.838169 systemd[1552]: Created slice app.slice - User Application Slice. Jan 29 12:41:51.838193 systemd[1552]: Reached target paths.target - Paths. Jan 29 12:41:51.838207 systemd[1552]: Reached target timers.target - Timers. Jan 29 12:41:51.841350 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:41:51.851074 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:41:51.851130 systemd[1552]: Reached target sockets.target - Sockets. Jan 29 12:41:51.851145 systemd[1552]: Reached target basic.target - Basic System. Jan 29 12:41:51.851181 systemd[1552]: Reached target default.target - Main User Target. Jan 29 12:41:51.851209 systemd[1552]: Startup finished in 129ms. Jan 29 12:41:51.852128 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:41:51.861505 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:41:51.946540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:41:51.970860 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:41:52.340101 systemd[1]: Started sshd@1-172.24.4.118:22-172.24.4.1:37156.service - OpenSSH per-connection server daemon (172.24.4.1:37156). Jan 29 12:41:53.539836 kubelet[1567]: E0129 12:41:53.539715 1567 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:41:53.544848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:41:53.545181 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:41:53.545963 systemd[1]: kubelet.service: Consumed 2.313s CPU time. Jan 29 12:41:54.479581 sshd[1575]: Accepted publickey for core from 172.24.4.1 port 37156 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:41:54.485567 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:41:54.500638 systemd-logind[1448]: New session 2 of user core. Jan 29 12:41:54.511218 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:41:54.520011 login[1528]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:41:54.541290 systemd-logind[1448]: New session 3 of user core. Jan 29 12:41:54.551737 login[1529]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:41:54.554090 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:41:54.571682 systemd-logind[1448]: New session 4 of user core. Jan 29 12:41:54.581926 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:41:55.282119 sshd[1575]: pam_unix(sshd:session): session closed for user core Jan 29 12:41:55.300578 systemd[1]: sshd@1-172.24.4.118:22-172.24.4.1:37156.service: Deactivated successfully. Jan 29 12:41:55.304213 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:41:55.306433 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:41:55.315130 systemd[1]: Started sshd@2-172.24.4.118:22-172.24.4.1:55056.service - OpenSSH per-connection server daemon (172.24.4.1:55056). Jan 29 12:41:55.318939 systemd-logind[1448]: Removed session 2. 
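The kubelet failure above, repeated at each scheduled restart later in the log, is the expected state before the node is bootstrapped: kubelet exits because /var/lib/kubelet/config.yaml does not exist yet, and that file is normally written by kubeadm during init or join. For orientation, a minimal KubeletConfiguration of the kind kubelet looks for at that path might resemble the sketch below; the values are placeholders and are not taken from this host:

    # Hypothetical minimal /var/lib/kubelet/config.yaml (normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd

The restart loop stops once a valid config is in place, as seen later in the log when kubelet comes up at 12:42:49.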
Jan 29 12:41:55.614574 coreos-metadata[1438]: Jan 29 12:41:55.614 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:41:55.661721 coreos-metadata[1438]: Jan 29 12:41:55.661 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 12:41:55.856294 coreos-metadata[1438]: Jan 29 12:41:55.856 INFO Fetch successful Jan 29 12:41:55.856294 coreos-metadata[1438]: Jan 29 12:41:55.856 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 12:41:55.872274 coreos-metadata[1438]: Jan 29 12:41:55.872 INFO Fetch successful Jan 29 12:41:55.872274 coreos-metadata[1438]: Jan 29 12:41:55.872 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 12:41:55.888636 coreos-metadata[1438]: Jan 29 12:41:55.888 INFO Fetch successful Jan 29 12:41:55.888636 coreos-metadata[1438]: Jan 29 12:41:55.888 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 12:41:55.904368 coreos-metadata[1438]: Jan 29 12:41:55.904 INFO Fetch successful Jan 29 12:41:55.904368 coreos-metadata[1438]: Jan 29 12:41:55.904 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 29 12:41:55.919349 coreos-metadata[1438]: Jan 29 12:41:55.919 INFO Fetch successful Jan 29 12:41:55.919349 coreos-metadata[1438]: Jan 29 12:41:55.919 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 12:41:55.933578 coreos-metadata[1438]: Jan 29 12:41:55.933 INFO Fetch successful Jan 29 12:41:55.990670 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:41:55.991912 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:41:55.993784 coreos-metadata[1500]: Jan 29 12:41:55.993 WARN failed to locate config-drive, using the metadata service API instead Jan 29 12:41:56.036363 coreos-metadata[1500]: Jan 29 12:41:56.036 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 12:41:56.051180 coreos-metadata[1500]: Jan 29 12:41:56.051 INFO Fetch successful Jan 29 12:41:56.051180 coreos-metadata[1500]: Jan 29 12:41:56.051 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 12:41:56.065160 coreos-metadata[1500]: Jan 29 12:41:56.065 INFO Fetch successful Jan 29 12:41:56.071300 unknown[1500]: wrote ssh authorized keys file for user: core Jan 29 12:41:56.114007 update-ssh-keys[1620]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:41:56.115032 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 12:41:56.118225 systemd[1]: Finished sshkeys.service. Jan 29 12:41:56.123014 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:41:56.123352 systemd[1]: Startup finished in 1.264s (kernel) + 20.369s (initrd) + 11.004s (userspace) = 32.638s. Jan 29 12:41:56.537980 sshd[1609]: Accepted publickey for core from 172.24.4.1 port 55056 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:41:56.540620 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:41:56.550946 systemd-logind[1448]: New session 5 of user core. Jan 29 12:41:56.560576 systemd[1]: Started session-5.scope - Session 5 of User core. 
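The coreos-metadata entries above fetch instance data from the OpenStack metadata service at 169.254.169.254 after failing to find a config drive. As an illustrative sketch, the same endpoints shown in the log can be queried manually from inside the instance, for example:

    # Manual queries against the endpoints visible in the log (illustrative only)
    curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json
    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key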
Jan 29 12:41:57.180315 sshd[1609]: pam_unix(sshd:session): session closed for user core Jan 29 12:41:57.188295 systemd[1]: sshd@2-172.24.4.118:22-172.24.4.1:55056.service: Deactivated successfully. Jan 29 12:41:57.191882 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:41:57.193670 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:41:57.195889 systemd-logind[1448]: Removed session 5. Jan 29 12:42:03.796311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:42:03.807712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:42:04.130545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:42:04.145139 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:42:04.236406 kubelet[1636]: E0129 12:42:04.236332 1636 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:42:04.244771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:42:04.244934 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:42:07.209139 systemd[1]: Started sshd@3-172.24.4.118:22-172.24.4.1:50488.service - OpenSSH per-connection server daemon (172.24.4.1:50488). Jan 29 12:42:08.474348 sshd[1646]: Accepted publickey for core from 172.24.4.1 port 50488 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:42:08.477417 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:42:08.489067 systemd-logind[1448]: New session 6 of user core. Jan 29 12:42:08.496579 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:42:09.045651 sshd[1646]: pam_unix(sshd:session): session closed for user core Jan 29 12:42:09.056993 systemd[1]: sshd@3-172.24.4.118:22-172.24.4.1:50488.service: Deactivated successfully. Jan 29 12:42:09.060571 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:42:09.064678 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:42:09.072012 systemd[1]: Started sshd@4-172.24.4.118:22-172.24.4.1:50498.service - OpenSSH per-connection server daemon (172.24.4.1:50498). Jan 29 12:42:09.075482 systemd-logind[1448]: Removed session 6. Jan 29 12:42:10.508559 sshd[1653]: Accepted publickey for core from 172.24.4.1 port 50498 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:42:10.511391 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:42:10.521176 systemd-logind[1448]: New session 7 of user core. Jan 29 12:42:10.530607 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:42:11.246584 sshd[1653]: pam_unix(sshd:session): session closed for user core Jan 29 12:42:11.258123 systemd[1]: sshd@4-172.24.4.118:22-172.24.4.1:50498.service: Deactivated successfully. Jan 29 12:42:11.261660 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:42:11.263789 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. 
Jan 29 12:42:11.276901 systemd[1]: Started sshd@5-172.24.4.118:22-172.24.4.1:50506.service - OpenSSH per-connection server daemon (172.24.4.1:50506). Jan 29 12:42:11.280189 systemd-logind[1448]: Removed session 7. Jan 29 12:42:12.791914 sshd[1660]: Accepted publickey for core from 172.24.4.1 port 50506 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:42:12.818867 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:42:12.829365 systemd-logind[1448]: New session 8 of user core. Jan 29 12:42:12.839679 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:42:13.489606 sshd[1660]: pam_unix(sshd:session): session closed for user core Jan 29 12:42:13.506621 systemd[1]: sshd@5-172.24.4.118:22-172.24.4.1:50506.service: Deactivated successfully. Jan 29 12:42:13.508491 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:42:13.511689 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:42:13.519910 systemd[1]: Started sshd@6-172.24.4.118:22-172.24.4.1:50512.service - OpenSSH per-connection server daemon (172.24.4.1:50512). Jan 29 12:42:13.525043 systemd-logind[1448]: Removed session 8. Jan 29 12:42:14.401533 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:42:14.408615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:42:14.736188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:42:14.740937 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:42:14.849442 kubelet[1676]: E0129 12:42:14.849316 1676 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:42:14.853784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:42:14.854119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:42:15.132973 sshd[1667]: Accepted publickey for core from 172.24.4.1 port 50512 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:42:15.135693 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:42:15.145922 systemd-logind[1448]: New session 9 of user core. Jan 29 12:42:15.151650 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:42:15.581890 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:42:15.582600 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:42:15.601420 sudo[1686]: pam_unix(sudo:session): session closed for user root Jan 29 12:42:15.780089 sshd[1667]: pam_unix(sshd:session): session closed for user core Jan 29 12:42:15.794384 systemd[1]: sshd@6-172.24.4.118:22-172.24.4.1:50512.service: Deactivated successfully. Jan 29 12:42:15.798841 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:42:15.801914 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:42:15.807817 systemd[1]: Started sshd@7-172.24.4.118:22-172.24.4.1:35472.service - OpenSSH per-connection server daemon (172.24.4.1:35472). 
Jan 29 12:42:15.810586 systemd-logind[1448]: Removed session 9. Jan 29 12:42:17.245928 sshd[1691]: Accepted publickey for core from 172.24.4.1 port 35472 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:42:17.248901 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:42:17.260076 systemd-logind[1448]: New session 10 of user core. Jan 29 12:42:17.272686 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 12:42:17.809315 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:42:17.810350 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:42:17.817743 sudo[1695]: pam_unix(sudo:session): session closed for user root Jan 29 12:42:17.828823 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:42:17.829656 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:42:17.852783 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:42:17.870481 auditctl[1698]: No rules Jan 29 12:42:17.871359 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:42:17.871808 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:42:17.882140 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:42:17.938836 augenrules[1716]: No rules Jan 29 12:42:17.940534 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:42:17.942927 sudo[1694]: pam_unix(sudo:session): session closed for user root Jan 29 12:42:18.105528 sshd[1691]: pam_unix(sshd:session): session closed for user core Jan 29 12:42:18.115152 systemd[1]: sshd@7-172.24.4.118:22-172.24.4.1:35472.service: Deactivated successfully. Jan 29 12:42:18.117867 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:42:18.121564 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:42:18.127829 systemd[1]: Started sshd@8-172.24.4.118:22-172.24.4.1:35474.service - OpenSSH per-connection server daemon (172.24.4.1:35474). Jan 29 12:42:18.130982 systemd-logind[1448]: Removed session 10. Jan 29 12:42:19.432943 sshd[1724]: Accepted publickey for core from 172.24.4.1 port 35474 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:42:19.436141 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:42:19.447377 systemd-logind[1448]: New session 11 of user core. Jan 29 12:42:19.456579 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 12:42:19.813045 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:42:19.814004 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:42:20.459745 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:42:20.463325 (dockerd)[1743]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:42:21.095824 dockerd[1743]: time="2025-01-29T12:42:21.095701505Z" level=info msg="Starting up" Jan 29 12:42:21.408662 dockerd[1743]: time="2025-01-29T12:42:21.408566346Z" level=info msg="Loading containers: start." 
Jan 29 12:42:21.587501 kernel: Initializing XFRM netlink socket Jan 29 12:42:21.696092 systemd-networkd[1360]: docker0: Link UP Jan 29 12:42:21.716764 dockerd[1743]: time="2025-01-29T12:42:21.716590568Z" level=info msg="Loading containers: done." Jan 29 12:42:21.748549 dockerd[1743]: time="2025-01-29T12:42:21.748464809Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:42:21.748704 dockerd[1743]: time="2025-01-29T12:42:21.748633249Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:42:21.748838 dockerd[1743]: time="2025-01-29T12:42:21.748794325Z" level=info msg="Daemon has completed initialization" Jan 29 12:42:21.826352 dockerd[1743]: time="2025-01-29T12:42:21.826276229Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:42:21.826908 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:42:24.356372 containerd[1470]: time="2025-01-29T12:42:24.355762826Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 12:42:24.901135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 12:42:24.910290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:42:25.061868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:42:25.083088 (kubelet)[1894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:42:25.464355 kubelet[1894]: E0129 12:42:25.220626 1894 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:42:25.224064 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:42:25.224431 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:42:25.556676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2241105147.mount: Deactivated successfully. 
Jan 29 12:42:27.664904 containerd[1470]: time="2025-01-29T12:42:27.664769318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:27.667316 containerd[1470]: time="2025-01-29T12:42:27.667106828Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020" Jan 29 12:42:27.670543 containerd[1470]: time="2025-01-29T12:42:27.670442336Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:27.680455 containerd[1470]: time="2025-01-29T12:42:27.680354811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:27.685562 containerd[1470]: time="2025-01-29T12:42:27.684588908Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 3.328681548s" Jan 29 12:42:27.685562 containerd[1470]: time="2025-01-29T12:42:27.684676603Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 12:42:27.740953 containerd[1470]: time="2025-01-29T12:42:27.740872484Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 12:42:30.062496 containerd[1470]: time="2025-01-29T12:42:30.062401497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:30.064168 containerd[1470]: time="2025-01-29T12:42:30.063856565Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753" Jan 29 12:42:30.065651 containerd[1470]: time="2025-01-29T12:42:30.065526207Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:30.070557 containerd[1470]: time="2025-01-29T12:42:30.069112320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:30.070557 containerd[1470]: time="2025-01-29T12:42:30.070430780Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.32949733s" Jan 29 12:42:30.070557 containerd[1470]: time="2025-01-29T12:42:30.070467088Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 12:42:30.092748 
containerd[1470]: time="2025-01-29T12:42:30.092704550Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 12:42:31.888381 containerd[1470]: time="2025-01-29T12:42:31.888189120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:31.915748 containerd[1470]: time="2025-01-29T12:42:31.915643080Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072" Jan 29 12:42:31.930561 containerd[1470]: time="2025-01-29T12:42:31.930436633Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:31.962919 containerd[1470]: time="2025-01-29T12:42:31.962815909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:31.967147 containerd[1470]: time="2025-01-29T12:42:31.966906600Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.874149533s" Jan 29 12:42:31.967147 containerd[1470]: time="2025-01-29T12:42:31.966981732Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 12:42:32.017999 containerd[1470]: time="2025-01-29T12:42:32.017788359Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:42:33.498438 update_engine[1450]: I20250129 12:42:33.497948 1450 update_attempter.cc:509] Updating boot flags... Jan 29 12:42:33.624131 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1984) Jan 29 12:42:33.866280 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1984) Jan 29 12:42:33.995280 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1984) Jan 29 12:42:34.345241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1037958208.mount: Deactivated successfully. 
Jan 29 12:42:35.067667 containerd[1470]: time="2025-01-29T12:42:35.067553594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:35.069721 containerd[1470]: time="2025-01-29T12:42:35.069646348Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 29 12:42:35.071141 containerd[1470]: time="2025-01-29T12:42:35.071076464Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:35.074375 containerd[1470]: time="2025-01-29T12:42:35.074212555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:35.075103 containerd[1470]: time="2025-01-29T12:42:35.074948132Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 3.057093057s" Jan 29 12:42:35.075103 containerd[1470]: time="2025-01-29T12:42:35.074982797Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 12:42:35.117851 containerd[1470]: time="2025-01-29T12:42:35.117807021Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:42:35.401310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 12:42:35.409063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:42:35.591048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:42:35.601702 (kubelet)[2013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:42:35.857887 kubelet[2013]: E0129 12:42:35.857632 2013 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:42:35.862060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:42:35.862687 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:42:36.067800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1625353376.mount: Deactivated successfully. 
Jan 29 12:42:37.825277 containerd[1470]: time="2025-01-29T12:42:37.825067217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:37.830188 containerd[1470]: time="2025-01-29T12:42:37.830064110Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 29 12:42:37.835539 containerd[1470]: time="2025-01-29T12:42:37.835421722Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:37.841633 containerd[1470]: time="2025-01-29T12:42:37.841439006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:37.845807 containerd[1470]: time="2025-01-29T12:42:37.845525965Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.727668177s" Jan 29 12:42:37.845807 containerd[1470]: time="2025-01-29T12:42:37.845616265Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 12:42:37.902060 containerd[1470]: time="2025-01-29T12:42:37.901545216Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 12:42:38.800186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825456802.mount: Deactivated successfully. 
Jan 29 12:42:38.815786 containerd[1470]: time="2025-01-29T12:42:38.814397298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:38.815981 containerd[1470]: time="2025-01-29T12:42:38.815949902Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 29 12:42:38.817560 containerd[1470]: time="2025-01-29T12:42:38.817535979Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:38.821965 containerd[1470]: time="2025-01-29T12:42:38.821936115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:38.825415 containerd[1470]: time="2025-01-29T12:42:38.825322151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 923.673811ms" Jan 29 12:42:38.825679 containerd[1470]: time="2025-01-29T12:42:38.825428592Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 12:42:38.861049 containerd[1470]: time="2025-01-29T12:42:38.860984960Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 12:42:39.540282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875410313.mount: Deactivated successfully. Jan 29 12:42:43.191721 containerd[1470]: time="2025-01-29T12:42:43.191575731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:43.195361 containerd[1470]: time="2025-01-29T12:42:43.195264721Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 29 12:42:43.198268 containerd[1470]: time="2025-01-29T12:42:43.196948177Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:43.206599 containerd[1470]: time="2025-01-29T12:42:43.206534500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:42:43.210861 containerd[1470]: time="2025-01-29T12:42:43.210794723Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.349386147s" Jan 29 12:42:43.211078 containerd[1470]: time="2025-01-29T12:42:43.211032712Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 12:42:45.901695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
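The PullImage entries above show containerd fetching the control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy v1.30.9, coredns v1.11.1, pause 3.9, etcd 3.5.12-0). As an illustrative sketch, assuming the ctr client is available on the host, the pulled images land in containerd's k8s.io namespace and can be listed there:

    # Illustrative only: list the images pulled above
    ctr --address /run/containerd/containerd.sock --namespace k8s.io images ls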
Jan 29 12:42:45.914362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:42:46.228443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:42:46.247546 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:42:46.308909 kubelet[2193]: E0129 12:42:46.308858 2193 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:42:46.311757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:42:46.311908 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:42:48.302152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:42:48.319847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:42:48.367315 systemd[1]: Reloading requested from client PID 2208 ('systemctl') (unit session-11.scope)... Jan 29 12:42:48.367670 systemd[1]: Reloading... Jan 29 12:42:48.481364 zram_generator::config[2250]: No configuration found. Jan 29 12:42:48.801084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:42:48.886421 systemd[1]: Reloading finished in 517 ms. Jan 29 12:42:48.933248 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:42:48.933328 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:42:48.933816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:42:48.941425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:42:49.056392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:42:49.066543 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:42:49.139407 kubelet[2311]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:42:49.140395 kubelet[2311]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:42:49.140395 kubelet[2311]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:42:49.297916 kubelet[2311]: I0129 12:42:49.297797 2311 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:42:49.904932 kubelet[2311]: I0129 12:42:49.904853 2311 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:42:49.904932 kubelet[2311]: I0129 12:42:49.904890 2311 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:42:49.905204 kubelet[2311]: I0129 12:42:49.905131 2311 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:42:49.929310 kubelet[2311]: I0129 12:42:49.927016 2311 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:42:49.929980 kubelet[2311]: E0129 12:42:49.929936 2311 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:49.945109 kubelet[2311]: I0129 12:42:49.945063 2311 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:42:49.945878 kubelet[2311]: I0129 12:42:49.945813 2311 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:42:49.946518 kubelet[2311]: I0129 12:42:49.946011 2311 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-e-97e17aa81b.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:42:49.947425 kubelet[2311]: I0129 12:42:49.947377 2311 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:42:49.947425 kubelet[2311]: I0129 12:42:49.947408 2311 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:42:49.947577 kubelet[2311]: I0129 12:42:49.947567 2311 state_mem.go:36] "Initialized new 
in-memory state store" Jan 29 12:42:49.948765 kubelet[2311]: I0129 12:42:49.948736 2311 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:42:49.948765 kubelet[2311]: I0129 12:42:49.948759 2311 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:42:49.948917 kubelet[2311]: I0129 12:42:49.948780 2311 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:42:49.948917 kubelet[2311]: I0129 12:42:49.948797 2311 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:42:49.959644 kubelet[2311]: W0129 12:42:49.959128 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:49.959644 kubelet[2311]: E0129 12:42:49.959285 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:49.959644 kubelet[2311]: W0129 12:42:49.959434 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-97e17aa81b.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:49.959644 kubelet[2311]: E0129 12:42:49.959516 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-97e17aa81b.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:49.960893 kubelet[2311]: I0129 12:42:49.960849 2311 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:42:49.964743 kubelet[2311]: I0129 12:42:49.964700 2311 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:42:49.964865 kubelet[2311]: W0129 12:42:49.964776 2311 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 12:42:49.966878 kubelet[2311]: I0129 12:42:49.966791 2311 server.go:1264] "Started kubelet" Jan 29 12:42:49.971282 kubelet[2311]: I0129 12:42:49.970294 2311 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:42:49.972807 kubelet[2311]: I0129 12:42:49.972727 2311 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:42:49.973097 kubelet[2311]: I0129 12:42:49.973063 2311 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:42:49.974338 kubelet[2311]: I0129 12:42:49.974304 2311 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:42:49.974502 kubelet[2311]: E0129 12:42:49.974353 2311 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.118:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.118:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-e-97e17aa81b.novalocal.181f2a5f4b970948 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-e-97e17aa81b.novalocal,UID:ci-4081-3-0-e-97e17aa81b.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-e-97e17aa81b.novalocal,},FirstTimestamp:2025-01-29 12:42:49.966750024 +0000 UTC m=+0.896134925,LastTimestamp:2025-01-29 12:42:49.966750024 +0000 UTC m=+0.896134925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-e-97e17aa81b.novalocal,}" Jan 29 12:42:49.979345 kubelet[2311]: I0129 12:42:49.979296 2311 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:42:49.984561 kubelet[2311]: I0129 12:42:49.984528 2311 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:42:49.988724 kubelet[2311]: W0129 12:42:49.988636 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:49.988946 kubelet[2311]: E0129 12:42:49.988916 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:49.989317 kubelet[2311]: E0129 12:42:49.989195 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-97e17aa81b.novalocal?timeout=10s\": dial tcp 172.24.4.118:6443: connect: connection refused" interval="200ms" Jan 29 12:42:49.990179 kubelet[2311]: I0129 12:42:49.984483 2311 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:42:49.990860 kubelet[2311]: E0129 12:42:49.990817 2311 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:42:49.991483 kubelet[2311]: I0129 12:42:49.991448 2311 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:42:49.991811 kubelet[2311]: I0129 12:42:49.991773 2311 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:42:49.993002 kubelet[2311]: I0129 12:42:49.992971 2311 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:42:49.994560 kubelet[2311]: I0129 12:42:49.994511 2311 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:42:50.007098 kubelet[2311]: I0129 12:42:50.007017 2311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:42:50.008413 kubelet[2311]: I0129 12:42:50.008340 2311 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:42:50.008413 kubelet[2311]: I0129 12:42:50.008374 2311 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:42:50.008413 kubelet[2311]: I0129 12:42:50.008395 2311 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:42:50.008727 kubelet[2311]: E0129 12:42:50.008433 2311 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:42:50.020352 kubelet[2311]: W0129 12:42:50.019656 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:50.020352 kubelet[2311]: E0129 12:42:50.019742 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:50.031907 kubelet[2311]: I0129 12:42:50.031870 2311 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:42:50.031907 kubelet[2311]: I0129 12:42:50.031889 2311 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:42:50.031907 kubelet[2311]: I0129 12:42:50.031905 2311 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:42:50.039351 kubelet[2311]: I0129 12:42:50.039306 2311 policy_none.go:49] "None policy: Start" Jan 29 12:42:50.039903 kubelet[2311]: I0129 12:42:50.039785 2311 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:42:50.039903 kubelet[2311]: I0129 12:42:50.039902 2311 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:42:50.045967 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 12:42:50.059828 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 12:42:50.075867 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 29 12:42:50.077500 kubelet[2311]: I0129 12:42:50.077482 2311 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:42:50.078443 kubelet[2311]: I0129 12:42:50.078044 2311 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:42:50.078443 kubelet[2311]: I0129 12:42:50.078165 2311 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:42:50.079805 kubelet[2311]: E0129 12:42:50.079765 2311 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-e-97e17aa81b.novalocal\" not found" Jan 29 12:42:50.088363 kubelet[2311]: I0129 12:42:50.088334 2311 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.088906 kubelet[2311]: E0129 12:42:50.088863 2311 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.118:6443/api/v1/nodes\": dial tcp 172.24.4.118:6443: connect: connection refused" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.109309 kubelet[2311]: I0129 12:42:50.109267 2311 topology_manager.go:215] "Topology Admit Handler" podUID="cd608055ff86a6853b344a87261cc4ad" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.111026 kubelet[2311]: I0129 12:42:50.110951 2311 topology_manager.go:215] "Topology Admit Handler" podUID="331ceb211ee27c9fd60a867df8ef0707" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.113020 kubelet[2311]: I0129 12:42:50.112811 2311 topology_manager.go:215] "Topology Admit Handler" podUID="3fb277a2870bd4e500a5521cbb73b1ca" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.123447 systemd[1]: Created slice kubepods-burstable-podcd608055ff86a6853b344a87261cc4ad.slice - libcontainer container kubepods-burstable-podcd608055ff86a6853b344a87261cc4ad.slice. Jan 29 12:42:50.149772 systemd[1]: Created slice kubepods-burstable-pod331ceb211ee27c9fd60a867df8ef0707.slice - libcontainer container kubepods-burstable-pod331ceb211ee27c9fd60a867df8ef0707.slice. Jan 29 12:42:50.165724 systemd[1]: Created slice kubepods-burstable-pod3fb277a2870bd4e500a5521cbb73b1ca.slice - libcontainer container kubepods-burstable-pod3fb277a2870bd4e500a5521cbb73b1ca.slice. 
Jan 29 12:42:50.190297 kubelet[2311]: E0129 12:42:50.190157 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-97e17aa81b.novalocal?timeout=10s\": dial tcp 172.24.4.118:6443: connect: connection refused" interval="400ms" Jan 29 12:42:50.194780 kubelet[2311]: I0129 12:42:50.194694 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd608055ff86a6853b344a87261cc4ad-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"cd608055ff86a6853b344a87261cc4ad\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.194780 kubelet[2311]: I0129 12:42:50.194762 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd608055ff86a6853b344a87261cc4ad-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"cd608055ff86a6853b344a87261cc4ad\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.194961 kubelet[2311]: I0129 12:42:50.194815 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd608055ff86a6853b344a87261cc4ad-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"cd608055ff86a6853b344a87261cc4ad\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.194961 kubelet[2311]: I0129 12:42:50.194865 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/331ceb211ee27c9fd60a867df8ef0707-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"331ceb211ee27c9fd60a867df8ef0707\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.194961 kubelet[2311]: I0129 12:42:50.194904 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/331ceb211ee27c9fd60a867df8ef0707-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"331ceb211ee27c9fd60a867df8ef0707\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.194961 kubelet[2311]: I0129 12:42:50.194944 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/331ceb211ee27c9fd60a867df8ef0707-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"331ceb211ee27c9fd60a867df8ef0707\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.195205 kubelet[2311]: I0129 12:42:50.194985 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/331ceb211ee27c9fd60a867df8ef0707-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"331ceb211ee27c9fd60a867df8ef0707\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.195205 kubelet[2311]: I0129 12:42:50.195025 2311 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/331ceb211ee27c9fd60a867df8ef0707-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"331ceb211ee27c9fd60a867df8ef0707\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.195205 kubelet[2311]: I0129 12:42:50.195087 2311 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fb277a2870bd4e500a5521cbb73b1ca-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"3fb277a2870bd4e500a5521cbb73b1ca\") " pod="kube-system/kube-scheduler-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.293685 kubelet[2311]: I0129 12:42:50.293614 2311 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.294467 kubelet[2311]: E0129 12:42:50.294378 2311 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.118:6443/api/v1/nodes\": dial tcp 172.24.4.118:6443: connect: connection refused" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.447215 containerd[1470]: time="2025-01-29T12:42:50.446986458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal,Uid:cd608055ff86a6853b344a87261cc4ad,Namespace:kube-system,Attempt:0,}" Jan 29 12:42:50.460741 containerd[1470]: time="2025-01-29T12:42:50.460587966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal,Uid:331ceb211ee27c9fd60a867df8ef0707,Namespace:kube-system,Attempt:0,}" Jan 29 12:42:50.473336 containerd[1470]: time="2025-01-29T12:42:50.472848303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-e-97e17aa81b.novalocal,Uid:3fb277a2870bd4e500a5521cbb73b1ca,Namespace:kube-system,Attempt:0,}" Jan 29 12:42:50.591712 kubelet[2311]: E0129 12:42:50.591629 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-97e17aa81b.novalocal?timeout=10s\": dial tcp 172.24.4.118:6443: connect: connection refused" interval="800ms" Jan 29 12:42:50.699465 kubelet[2311]: I0129 12:42:50.698104 2311 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.699465 kubelet[2311]: E0129 12:42:50.698869 2311 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.118:6443/api/v1/nodes\": dial tcp 172.24.4.118:6443: connect: connection refused" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:50.916034 kubelet[2311]: W0129 12:42:50.915895 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:50.916320 kubelet[2311]: E0129 12:42:50.916059 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 
12:42:50.937914 kubelet[2311]: W0129 12:42:50.937763 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:50.937914 kubelet[2311]: E0129 12:42:50.937908 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.118:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:50.949166 kubelet[2311]: W0129 12:42:50.949049 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-97e17aa81b.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:50.949166 kubelet[2311]: E0129 12:42:50.949153 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-e-97e17aa81b.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:51.137563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418731106.mount: Deactivated successfully. Jan 29 12:42:51.151663 containerd[1470]: time="2025-01-29T12:42:51.151566973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:42:51.159320 containerd[1470]: time="2025-01-29T12:42:51.159082904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 12:42:51.161070 containerd[1470]: time="2025-01-29T12:42:51.160950132Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:42:51.163551 containerd[1470]: time="2025-01-29T12:42:51.163420223Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:42:51.165643 containerd[1470]: time="2025-01-29T12:42:51.165509529Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:42:51.167696 containerd[1470]: time="2025-01-29T12:42:51.167579347Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:42:51.168776 containerd[1470]: time="2025-01-29T12:42:51.168664436Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:42:51.176012 containerd[1470]: time="2025-01-29T12:42:51.175932321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:42:51.180600 containerd[1470]: time="2025-01-29T12:42:51.180152680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" 
with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 719.414362ms" Jan 29 12:42:51.188636 containerd[1470]: time="2025-01-29T12:42:51.188539097Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 741.327476ms" Jan 29 12:42:51.191575 containerd[1470]: time="2025-01-29T12:42:51.191479361Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 718.494901ms" Jan 29 12:42:51.395453 kubelet[2311]: E0129 12:42:51.393210 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-e-97e17aa81b.novalocal?timeout=10s\": dial tcp 172.24.4.118:6443: connect: connection refused" interval="1.6s" Jan 29 12:42:51.404473 containerd[1470]: time="2025-01-29T12:42:51.403773651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:42:51.404473 containerd[1470]: time="2025-01-29T12:42:51.403846938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:42:51.404473 containerd[1470]: time="2025-01-29T12:42:51.403866835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:42:51.405402 containerd[1470]: time="2025-01-29T12:42:51.405307242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:42:51.411757 containerd[1470]: time="2025-01-29T12:42:51.411627416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:42:51.411757 containerd[1470]: time="2025-01-29T12:42:51.411700123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:42:51.412033 containerd[1470]: time="2025-01-29T12:42:51.411728586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:42:51.412033 containerd[1470]: time="2025-01-29T12:42:51.411823726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:42:51.434864 containerd[1470]: time="2025-01-29T12:42:51.433836543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:42:51.434864 containerd[1470]: time="2025-01-29T12:42:51.433903569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:42:51.434864 containerd[1470]: time="2025-01-29T12:42:51.433923907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:42:51.434864 containerd[1470]: time="2025-01-29T12:42:51.434040726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:42:51.448529 systemd[1]: Started cri-containerd-fe8e6c031bef622187aa2d604648efdaf7689bec433ffc492496243efcf2c2d6.scope - libcontainer container fe8e6c031bef622187aa2d604648efdaf7689bec433ffc492496243efcf2c2d6. Jan 29 12:42:51.461467 systemd[1]: Started cri-containerd-35b027d17ec4df5890217ec917911b61d6c014bc9dad48f0527c818692b2e65a.scope - libcontainer container 35b027d17ec4df5890217ec917911b61d6c014bc9dad48f0527c818692b2e65a. Jan 29 12:42:51.467603 systemd[1]: Started cri-containerd-47bbcdadcba5e0159984dc756fcff006ed864e784e5e1838eb837f564bf2c8da.scope - libcontainer container 47bbcdadcba5e0159984dc756fcff006ed864e784e5e1838eb837f564bf2c8da. Jan 29 12:42:51.474303 kubelet[2311]: W0129 12:42:51.473988 2311 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:51.474303 kubelet[2311]: E0129 12:42:51.474026 2311 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.118:6443: connect: connection refused Jan 29 12:42:51.501779 kubelet[2311]: I0129 12:42:51.501731 2311 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:51.502247 kubelet[2311]: E0129 12:42:51.502189 2311 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.118:6443/api/v1/nodes\": dial tcp 172.24.4.118:6443: connect: connection refused" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:51.535887 containerd[1470]: time="2025-01-29T12:42:51.535731303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-e-97e17aa81b.novalocal,Uid:3fb277a2870bd4e500a5521cbb73b1ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe8e6c031bef622187aa2d604648efdaf7689bec433ffc492496243efcf2c2d6\"" Jan 29 12:42:51.540989 containerd[1470]: time="2025-01-29T12:42:51.539294859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal,Uid:331ceb211ee27c9fd60a867df8ef0707,Namespace:kube-system,Attempt:0,} returns sandbox id \"35b027d17ec4df5890217ec917911b61d6c014bc9dad48f0527c818692b2e65a\"" Jan 29 12:42:51.545972 containerd[1470]: time="2025-01-29T12:42:51.545725100Z" level=info msg="CreateContainer within sandbox \"fe8e6c031bef622187aa2d604648efdaf7689bec433ffc492496243efcf2c2d6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:42:51.547954 containerd[1470]: time="2025-01-29T12:42:51.547902942Z" level=info msg="CreateContainer within sandbox \"35b027d17ec4df5890217ec917911b61d6c014bc9dad48f0527c818692b2e65a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:42:51.551355 containerd[1470]: time="2025-01-29T12:42:51.551322276Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal,Uid:cd608055ff86a6853b344a87261cc4ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"47bbcdadcba5e0159984dc756fcff006ed864e784e5e1838eb837f564bf2c8da\"" Jan 29 12:42:51.556985 containerd[1470]: time="2025-01-29T12:42:51.556945721Z" level=info msg="CreateContainer within sandbox \"47bbcdadcba5e0159984dc756fcff006ed864e784e5e1838eb837f564bf2c8da\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:42:51.587010 containerd[1470]: time="2025-01-29T12:42:51.586920899Z" level=info msg="CreateContainer within sandbox \"35b027d17ec4df5890217ec917911b61d6c014bc9dad48f0527c818692b2e65a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e31a8f20d5543dd5e80259fc1062c4e6c96d4353cceccb782ca7d94c74ab832\"" Jan 29 12:42:51.588069 containerd[1470]: time="2025-01-29T12:42:51.588029882Z" level=info msg="StartContainer for \"6e31a8f20d5543dd5e80259fc1062c4e6c96d4353cceccb782ca7d94c74ab832\"" Jan 29 12:42:51.595815 containerd[1470]: time="2025-01-29T12:42:51.595755668Z" level=info msg="CreateContainer within sandbox \"47bbcdadcba5e0159984dc756fcff006ed864e784e5e1838eb837f564bf2c8da\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"03adda1d6ea1a3fa840a2d10f7ea752a490fce109c9685d353664db5391ed1b6\"" Jan 29 12:42:51.596976 containerd[1470]: time="2025-01-29T12:42:51.596952797Z" level=info msg="CreateContainer within sandbox \"fe8e6c031bef622187aa2d604648efdaf7689bec433ffc492496243efcf2c2d6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b21cecc462aa47a6715d3181dbd43d6a0eed47178ede7906382230060b5f8689\"" Jan 29 12:42:51.597578 containerd[1470]: time="2025-01-29T12:42:51.597276275Z" level=info msg="StartContainer for \"03adda1d6ea1a3fa840a2d10f7ea752a490fce109c9685d353664db5391ed1b6\"" Jan 29 12:42:51.598855 containerd[1470]: time="2025-01-29T12:42:51.598835704Z" level=info msg="StartContainer for \"b21cecc462aa47a6715d3181dbd43d6a0eed47178ede7906382230060b5f8689\"" Jan 29 12:42:51.634627 systemd[1]: Started cri-containerd-6e31a8f20d5543dd5e80259fc1062c4e6c96d4353cceccb782ca7d94c74ab832.scope - libcontainer container 6e31a8f20d5543dd5e80259fc1062c4e6c96d4353cceccb782ca7d94c74ab832. Jan 29 12:42:51.642441 systemd[1]: Started cri-containerd-b21cecc462aa47a6715d3181dbd43d6a0eed47178ede7906382230060b5f8689.scope - libcontainer container b21cecc462aa47a6715d3181dbd43d6a0eed47178ede7906382230060b5f8689. Jan 29 12:42:51.651697 systemd[1]: Started cri-containerd-03adda1d6ea1a3fa840a2d10f7ea752a490fce109c9685d353664db5391ed1b6.scope - libcontainer container 03adda1d6ea1a3fa840a2d10f7ea752a490fce109c9685d353664db5391ed1b6. 
Jan 29 12:42:51.720265 containerd[1470]: time="2025-01-29T12:42:51.719381317Z" level=info msg="StartContainer for \"6e31a8f20d5543dd5e80259fc1062c4e6c96d4353cceccb782ca7d94c74ab832\" returns successfully" Jan 29 12:42:51.740273 containerd[1470]: time="2025-01-29T12:42:51.739185745Z" level=info msg="StartContainer for \"b21cecc462aa47a6715d3181dbd43d6a0eed47178ede7906382230060b5f8689\" returns successfully" Jan 29 12:42:51.747272 containerd[1470]: time="2025-01-29T12:42:51.746825179Z" level=info msg="StartContainer for \"03adda1d6ea1a3fa840a2d10f7ea752a490fce109c9685d353664db5391ed1b6\" returns successfully" Jan 29 12:42:53.105340 kubelet[2311]: I0129 12:42:53.105293 2311 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:53.809790 kubelet[2311]: E0129 12:42:53.809742 2311 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-e-97e17aa81b.novalocal\" not found" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:53.952689 kubelet[2311]: I0129 12:42:53.952327 2311 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:53.961349 kubelet[2311]: I0129 12:42:53.961283 2311 apiserver.go:52] "Watching apiserver" Jan 29 12:42:53.988555 kubelet[2311]: I0129 12:42:53.988501 2311 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:42:54.055421 kubelet[2311]: E0129 12:42:54.055083 2311 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:56.310644 systemd[1]: Reloading requested from client PID 2589 ('systemctl') (unit session-11.scope)... Jan 29 12:42:56.310686 systemd[1]: Reloading... Jan 29 12:42:56.434287 zram_generator::config[2628]: No configuration found. Jan 29 12:42:56.460160 kubelet[2311]: W0129 12:42:56.460097 2311 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:42:56.611486 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:42:56.748212 systemd[1]: Reloading finished in 436 ms. Jan 29 12:42:56.795162 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:42:56.802282 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:42:56.802543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:42:56.802603 systemd[1]: kubelet.service: Consumed 1.270s CPU time, 115.7M memory peak, 0B memory swap peak. Jan 29 12:42:56.807464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:42:56.911908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:42:56.921740 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:42:57.357465 kubelet[2692]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:42:57.358278 kubelet[2692]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:42:57.358278 kubelet[2692]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:42:57.358278 kubelet[2692]: I0129 12:42:57.358061 2692 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:42:57.362705 kubelet[2692]: I0129 12:42:57.362664 2692 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:42:57.362894 kubelet[2692]: I0129 12:42:57.362793 2692 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:42:57.363908 kubelet[2692]: I0129 12:42:57.363259 2692 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:42:57.364885 kubelet[2692]: I0129 12:42:57.364868 2692 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:42:57.366469 kubelet[2692]: I0129 12:42:57.366434 2692 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:42:57.373343 kubelet[2692]: I0129 12:42:57.373291 2692 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:42:57.373534 kubelet[2692]: I0129 12:42:57.373487 2692 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:42:57.373843 kubelet[2692]: I0129 12:42:57.373533 2692 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-e-97e17aa81b.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 
12:42:57.373968 kubelet[2692]: I0129 12:42:57.373850 2692 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:42:57.373968 kubelet[2692]: I0129 12:42:57.373864 2692 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:42:57.373968 kubelet[2692]: I0129 12:42:57.373897 2692 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:42:57.374054 kubelet[2692]: I0129 12:42:57.373985 2692 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:42:57.374054 kubelet[2692]: I0129 12:42:57.374044 2692 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:42:57.374115 kubelet[2692]: I0129 12:42:57.374081 2692 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:42:57.374115 kubelet[2692]: I0129 12:42:57.374101 2692 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:42:57.378527 kubelet[2692]: I0129 12:42:57.378493 2692 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:42:57.378681 kubelet[2692]: I0129 12:42:57.378658 2692 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:42:57.379142 kubelet[2692]: I0129 12:42:57.379119 2692 server.go:1264] "Started kubelet" Jan 29 12:42:57.387347 kubelet[2692]: I0129 12:42:57.386772 2692 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:42:57.398576 kubelet[2692]: I0129 12:42:57.398316 2692 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:42:57.399380 kubelet[2692]: I0129 12:42:57.399356 2692 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:42:57.403973 kubelet[2692]: I0129 12:42:57.403919 2692 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:42:57.404131 kubelet[2692]: I0129 12:42:57.404109 2692 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:42:57.408658 kubelet[2692]: I0129 12:42:57.407583 2692 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:42:57.411134 kubelet[2692]: I0129 12:42:57.411107 2692 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:42:57.411337 kubelet[2692]: I0129 12:42:57.411318 2692 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:42:57.413860 kubelet[2692]: I0129 12:42:57.413839 2692 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:42:57.413946 kubelet[2692]: I0129 12:42:57.413937 2692 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:42:57.414114 kubelet[2692]: I0129 12:42:57.414094 2692 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:42:57.414876 kubelet[2692]: I0129 12:42:57.414810 2692 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:42:57.416246 kubelet[2692]: I0129 12:42:57.416185 2692 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:42:57.416311 kubelet[2692]: I0129 12:42:57.416279 2692 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:42:57.416311 kubelet[2692]: I0129 12:42:57.416308 2692 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:42:57.416382 kubelet[2692]: E0129 12:42:57.416348 2692 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:42:57.474001 sudo[2721]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 12:42:57.474329 sudo[2721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 12:42:57.500956 kubelet[2692]: I0129 12:42:57.500930 2692 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:42:57.501434 kubelet[2692]: I0129 12:42:57.501116 2692 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:42:57.501434 kubelet[2692]: I0129 12:42:57.501138 2692 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:42:57.501434 kubelet[2692]: I0129 12:42:57.501350 2692 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:42:57.501434 kubelet[2692]: I0129 12:42:57.501362 2692 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:42:57.501434 kubelet[2692]: I0129 12:42:57.501383 2692 policy_none.go:49] "None policy: Start" Jan 29 12:42:57.502257 kubelet[2692]: I0129 12:42:57.502032 2692 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:42:57.502257 kubelet[2692]: I0129 12:42:57.502051 2692 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:42:57.502257 kubelet[2692]: I0129 12:42:57.502209 2692 state_mem.go:75] "Updated machine memory state" Jan 29 12:42:57.506796 kubelet[2692]: I0129 12:42:57.506760 2692 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:42:57.508868 kubelet[2692]: I0129 12:42:57.508266 2692 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:42:57.510054 kubelet[2692]: I0129 12:42:57.509499 2692 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:42:57.516049 kubelet[2692]: I0129 12:42:57.516025 2692 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.516494 kubelet[2692]: I0129 12:42:57.516460 2692 topology_manager.go:215] "Topology Admit Handler" podUID="cd608055ff86a6853b344a87261cc4ad" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.516650 kubelet[2692]: I0129 12:42:57.516618 2692 topology_manager.go:215] "Topology Admit Handler" podUID="331ceb211ee27c9fd60a867df8ef0707" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.516826 kubelet[2692]: I0129 12:42:57.516811 2692 topology_manager.go:215] "Topology Admit Handler" podUID="3fb277a2870bd4e500a5521cbb73b1ca" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.549716 kubelet[2692]: W0129 12:42:57.549676 2692 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:42:57.551503 kubelet[2692]: W0129 12:42:57.549976 2692 warnings.go:70] metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:42:57.551503 kubelet[2692]: W0129 12:42:57.550302 2692 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:42:57.551774 kubelet[2692]: E0129 12:42:57.551657 2692 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-0-e-97e17aa81b.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.560485 kubelet[2692]: I0129 12:42:57.560448 2692 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.560926 kubelet[2692]: I0129 12:42:57.560776 2692 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.713019 kubelet[2692]: I0129 12:42:57.712968 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/331ceb211ee27c9fd60a867df8ef0707-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"331ceb211ee27c9fd60a867df8ef0707\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.713019 kubelet[2692]: I0129 12:42:57.713019 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/331ceb211ee27c9fd60a867df8ef0707-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"331ceb211ee27c9fd60a867df8ef0707\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.713223 kubelet[2692]: I0129 12:42:57.713047 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/331ceb211ee27c9fd60a867df8ef0707-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"331ceb211ee27c9fd60a867df8ef0707\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.713223 kubelet[2692]: I0129 12:42:57.713069 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd608055ff86a6853b344a87261cc4ad-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"cd608055ff86a6853b344a87261cc4ad\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.713223 kubelet[2692]: I0129 12:42:57.713094 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/331ceb211ee27c9fd60a867df8ef0707-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"331ceb211ee27c9fd60a867df8ef0707\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.713223 kubelet[2692]: I0129 12:42:57.713116 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/331ceb211ee27c9fd60a867df8ef0707-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: 
\"331ceb211ee27c9fd60a867df8ef0707\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.713356 kubelet[2692]: I0129 12:42:57.713136 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd608055ff86a6853b344a87261cc4ad-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"cd608055ff86a6853b344a87261cc4ad\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.713356 kubelet[2692]: I0129 12:42:57.713158 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd608055ff86a6853b344a87261cc4ad-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"cd608055ff86a6853b344a87261cc4ad\") " pod="kube-system/kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:57.713356 kubelet[2692]: I0129 12:42:57.713180 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fb277a2870bd4e500a5521cbb73b1ca-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-e-97e17aa81b.novalocal\" (UID: \"3fb277a2870bd4e500a5521cbb73b1ca\") " pod="kube-system/kube-scheduler-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:58.136343 sudo[2721]: pam_unix(sudo:session): session closed for user root Jan 29 12:42:58.377715 kubelet[2692]: I0129 12:42:58.376494 2692 apiserver.go:52] "Watching apiserver" Jan 29 12:42:58.411306 kubelet[2692]: I0129 12:42:58.411248 2692 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:42:58.491967 kubelet[2692]: W0129 12:42:58.490360 2692 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:42:58.491967 kubelet[2692]: W0129 12:42:58.490416 2692 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:42:58.491967 kubelet[2692]: E0129 12:42:58.490470 2692 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:58.491967 kubelet[2692]: E0129 12:42:58.491145 2692 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal" Jan 29 12:42:58.543275 kubelet[2692]: I0129 12:42:58.542407 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-e-97e17aa81b.novalocal" podStartSLOduration=2.542372586 podStartE2EDuration="2.542372586s" podCreationTimestamp="2025-01-29 12:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:42:58.486395055 +0000 UTC m=+1.559829365" watchObservedRunningTime="2025-01-29 12:42:58.542372586 +0000 UTC m=+1.615806906" Jan 29 12:42:58.752298 kubelet[2692]: I0129 12:42:58.752008 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4081-3-0-e-97e17aa81b.novalocal" podStartSLOduration=1.75188974 podStartE2EDuration="1.75188974s" podCreationTimestamp="2025-01-29 12:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:42:58.547016373 +0000 UTC m=+1.620450693" watchObservedRunningTime="2025-01-29 12:42:58.75188974 +0000 UTC m=+1.825324060" Jan 29 12:42:59.123879 kubelet[2692]: I0129 12:42:59.122426 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-e-97e17aa81b.novalocal" podStartSLOduration=2.122390314 podStartE2EDuration="2.122390314s" podCreationTimestamp="2025-01-29 12:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:42:58.754656643 +0000 UTC m=+1.828090963" watchObservedRunningTime="2025-01-29 12:42:59.122390314 +0000 UTC m=+2.195824634" Jan 29 12:43:01.134299 sudo[1727]: pam_unix(sudo:session): session closed for user root Jan 29 12:43:01.413941 sshd[1724]: pam_unix(sshd:session): session closed for user core Jan 29 12:43:01.421029 systemd[1]: sshd@8-172.24.4.118:22-172.24.4.1:35474.service: Deactivated successfully. Jan 29 12:43:01.423772 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 12:43:01.424272 systemd[1]: session-11.scope: Consumed 8.334s CPU time, 192.0M memory peak, 0B memory swap peak. Jan 29 12:43:01.425546 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Jan 29 12:43:01.427397 systemd-logind[1448]: Removed session 11. Jan 29 12:43:11.280863 kubelet[2692]: I0129 12:43:11.280348 2692 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:43:11.281884 containerd[1470]: time="2025-01-29T12:43:11.280774826Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 12:43:11.282808 kubelet[2692]: I0129 12:43:11.282541 2692 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:43:11.800787 kubelet[2692]: I0129 12:43:11.799884 2692 topology_manager.go:215] "Topology Admit Handler" podUID="1562ca40-a8cb-4a9d-8200-2ad83326cd65" podNamespace="kube-system" podName="kube-proxy-6l2qh" Jan 29 12:43:11.824378 kubelet[2692]: W0129 12:43:11.821435 2692 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-0-e-97e17aa81b.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-e-97e17aa81b.novalocal' and this object Jan 29 12:43:11.824378 kubelet[2692]: E0129 12:43:11.821512 2692 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-0-e-97e17aa81b.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-e-97e17aa81b.novalocal' and this object Jan 29 12:43:11.824378 kubelet[2692]: W0129 12:43:11.821635 2692 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-3-0-e-97e17aa81b.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-e-97e17aa81b.novalocal' and this object Jan 29 12:43:11.824378 kubelet[2692]: E0129 12:43:11.821676 2692 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-3-0-e-97e17aa81b.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-e-97e17aa81b.novalocal' and this object Jan 29 12:43:11.830214 systemd[1]: Created slice kubepods-besteffort-pod1562ca40_a8cb_4a9d_8200_2ad83326cd65.slice - libcontainer container kubepods-besteffort-pod1562ca40_a8cb_4a9d_8200_2ad83326cd65.slice. Jan 29 12:43:11.860376 kubelet[2692]: I0129 12:43:11.860325 2692 topology_manager.go:215] "Topology Admit Handler" podUID="9cdbe399-d451-48b6-b761-0a0a74024888" podNamespace="kube-system" podName="cilium-v9rg2" Jan 29 12:43:11.871086 systemd[1]: Created slice kubepods-burstable-pod9cdbe399_d451_48b6_b761_0a0a74024888.slice - libcontainer container kubepods-burstable-pod9cdbe399_d451_48b6_b761_0a0a74024888.slice. 
Jan 29 12:43:11.878917 kubelet[2692]: W0129 12:43:11.878804 2692 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-0-e-97e17aa81b.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-e-97e17aa81b.novalocal' and this object Jan 29 12:43:11.879850 kubelet[2692]: E0129 12:43:11.879807 2692 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-0-e-97e17aa81b.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-e-97e17aa81b.novalocal' and this object Jan 29 12:43:11.906058 kubelet[2692]: I0129 12:43:11.906021 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1562ca40-a8cb-4a9d-8200-2ad83326cd65-kube-proxy\") pod \"kube-proxy-6l2qh\" (UID: \"1562ca40-a8cb-4a9d-8200-2ad83326cd65\") " pod="kube-system/kube-proxy-6l2qh" Jan 29 12:43:11.906294 kubelet[2692]: I0129 12:43:11.906277 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1562ca40-a8cb-4a9d-8200-2ad83326cd65-xtables-lock\") pod \"kube-proxy-6l2qh\" (UID: \"1562ca40-a8cb-4a9d-8200-2ad83326cd65\") " pod="kube-system/kube-proxy-6l2qh" Jan 29 12:43:11.906549 kubelet[2692]: I0129 12:43:11.906387 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb8s5\" (UniqueName: \"kubernetes.io/projected/1562ca40-a8cb-4a9d-8200-2ad83326cd65-kube-api-access-mb8s5\") pod \"kube-proxy-6l2qh\" (UID: \"1562ca40-a8cb-4a9d-8200-2ad83326cd65\") " pod="kube-system/kube-proxy-6l2qh" Jan 29 12:43:11.906549 kubelet[2692]: I0129 12:43:11.906416 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-hostproc\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:11.906549 kubelet[2692]: I0129 12:43:11.906439 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzdfn\" (UniqueName: \"kubernetes.io/projected/9cdbe399-d451-48b6-b761-0a0a74024888-kube-api-access-tzdfn\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:11.906549 kubelet[2692]: I0129 12:43:11.906459 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-bpf-maps\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:11.907424 kubelet[2692]: I0129 12:43:11.906493 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-xtables-lock\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:11.907484 kubelet[2692]: I0129 
12:43:11.907444 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-cni-path\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:11.907484 kubelet[2692]: I0129 12:43:11.907468 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-host-proc-sys-kernel\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:11.907545 kubelet[2692]: I0129 12:43:11.907487 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9cdbe399-d451-48b6-b761-0a0a74024888-hubble-tls\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:11.907545 kubelet[2692]: I0129 12:43:11.907505 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-cilium-cgroup\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:11.907545 kubelet[2692]: I0129 12:43:11.907522 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-lib-modules\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:11.907545 kubelet[2692]: I0129 12:43:11.907539 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-host-proc-sys-net\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:11.907653 kubelet[2692]: I0129 12:43:11.907559 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-etc-cni-netd\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:11.907653 kubelet[2692]: I0129 12:43:11.907579 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9cdbe399-d451-48b6-b761-0a0a74024888-clustermesh-secrets\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:11.907653 kubelet[2692]: I0129 12:43:11.907596 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1562ca40-a8cb-4a9d-8200-2ad83326cd65-lib-modules\") pod \"kube-proxy-6l2qh\" (UID: \"1562ca40-a8cb-4a9d-8200-2ad83326cd65\") " pod="kube-system/kube-proxy-6l2qh" Jan 29 12:43:11.907653 kubelet[2692]: I0129 12:43:11.907615 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-cilium-run\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:11.907653 kubelet[2692]: I0129 12:43:11.907636 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9cdbe399-d451-48b6-b761-0a0a74024888-cilium-config-path\") pod \"cilium-v9rg2\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " pod="kube-system/cilium-v9rg2" Jan 29 12:43:12.251018 kubelet[2692]: I0129 12:43:12.250849 2692 topology_manager.go:215] "Topology Admit Handler" podUID="c2b8f2e1-2854-4f82-ad46-821e44d64f8b" podNamespace="kube-system" podName="cilium-operator-599987898-rggcd" Jan 29 12:43:12.276343 systemd[1]: Created slice kubepods-besteffort-podc2b8f2e1_2854_4f82_ad46_821e44d64f8b.slice - libcontainer container kubepods-besteffort-podc2b8f2e1_2854_4f82_ad46_821e44d64f8b.slice. Jan 29 12:43:12.311707 kubelet[2692]: I0129 12:43:12.311650 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2b8f2e1-2854-4f82-ad46-821e44d64f8b-cilium-config-path\") pod \"cilium-operator-599987898-rggcd\" (UID: \"c2b8f2e1-2854-4f82-ad46-821e44d64f8b\") " pod="kube-system/cilium-operator-599987898-rggcd" Jan 29 12:43:12.312195 kubelet[2692]: I0129 12:43:12.311831 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd4pk\" (UniqueName: \"kubernetes.io/projected/c2b8f2e1-2854-4f82-ad46-821e44d64f8b-kube-api-access-nd4pk\") pod \"cilium-operator-599987898-rggcd\" (UID: \"c2b8f2e1-2854-4f82-ad46-821e44d64f8b\") " pod="kube-system/cilium-operator-599987898-rggcd" Jan 29 12:43:13.009975 kubelet[2692]: E0129 12:43:13.009919 2692 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 29 12:43:13.009975 kubelet[2692]: E0129 12:43:13.009962 2692 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-v9rg2: failed to sync secret cache: timed out waiting for the condition Jan 29 12:43:13.010473 kubelet[2692]: E0129 12:43:13.010173 2692 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9cdbe399-d451-48b6-b761-0a0a74024888-hubble-tls podName:9cdbe399-d451-48b6-b761-0a0a74024888 nodeName:}" failed. No retries permitted until 2025-01-29 12:43:13.510092001 +0000 UTC m=+16.583526311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/9cdbe399-d451-48b6-b761-0a0a74024888-hubble-tls") pod "cilium-v9rg2" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888") : failed to sync secret cache: timed out waiting for the condition Jan 29 12:43:13.041953 containerd[1470]: time="2025-01-29T12:43:13.041835616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6l2qh,Uid:1562ca40-a8cb-4a9d-8200-2ad83326cd65,Namespace:kube-system,Attempt:0,}" Jan 29 12:43:13.108112 containerd[1470]: time="2025-01-29T12:43:13.107944368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:43:13.108899 containerd[1470]: time="2025-01-29T12:43:13.108728167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:43:13.109144 containerd[1470]: time="2025-01-29T12:43:13.108911207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:43:13.109624 containerd[1470]: time="2025-01-29T12:43:13.109537825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:43:13.145291 systemd[1]: Started cri-containerd-2a80af6097fb26e8c38bcf2b9a33b14809d6b50abbaf0fc991532bd983c82738.scope - libcontainer container 2a80af6097fb26e8c38bcf2b9a33b14809d6b50abbaf0fc991532bd983c82738. Jan 29 12:43:13.168032 containerd[1470]: time="2025-01-29T12:43:13.167991076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6l2qh,Uid:1562ca40-a8cb-4a9d-8200-2ad83326cd65,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a80af6097fb26e8c38bcf2b9a33b14809d6b50abbaf0fc991532bd983c82738\"" Jan 29 12:43:13.172966 containerd[1470]: time="2025-01-29T12:43:13.172890807Z" level=info msg="CreateContainer within sandbox \"2a80af6097fb26e8c38bcf2b9a33b14809d6b50abbaf0fc991532bd983c82738\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:43:13.180751 containerd[1470]: time="2025-01-29T12:43:13.180716595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rggcd,Uid:c2b8f2e1-2854-4f82-ad46-821e44d64f8b,Namespace:kube-system,Attempt:0,}" Jan 29 12:43:13.205765 containerd[1470]: time="2025-01-29T12:43:13.205636869Z" level=info msg="CreateContainer within sandbox \"2a80af6097fb26e8c38bcf2b9a33b14809d6b50abbaf0fc991532bd983c82738\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fafdb3ddf3f16033eda87f0376676153ad720935427890187b31a767753d0163\"" Jan 29 12:43:13.209408 containerd[1470]: time="2025-01-29T12:43:13.209349650Z" level=info msg="StartContainer for \"fafdb3ddf3f16033eda87f0376676153ad720935427890187b31a767753d0163\"" Jan 29 12:43:13.235977 containerd[1470]: time="2025-01-29T12:43:13.235873401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:43:13.236435 containerd[1470]: time="2025-01-29T12:43:13.236019961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:43:13.236435 containerd[1470]: time="2025-01-29T12:43:13.236121746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:43:13.236956 containerd[1470]: time="2025-01-29T12:43:13.236794462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:43:13.255450 systemd[1]: Started cri-containerd-fafdb3ddf3f16033eda87f0376676153ad720935427890187b31a767753d0163.scope - libcontainer container fafdb3ddf3f16033eda87f0376676153ad720935427890187b31a767753d0163. Jan 29 12:43:13.269430 systemd[1]: Started cri-containerd-dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3.scope - libcontainer container dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3. 
Jan 29 12:43:13.303744 containerd[1470]: time="2025-01-29T12:43:13.303609917Z" level=info msg="StartContainer for \"fafdb3ddf3f16033eda87f0376676153ad720935427890187b31a767753d0163\" returns successfully" Jan 29 12:43:13.326019 containerd[1470]: time="2025-01-29T12:43:13.325379845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rggcd,Uid:c2b8f2e1-2854-4f82-ad46-821e44d64f8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\"" Jan 29 12:43:13.327891 containerd[1470]: time="2025-01-29T12:43:13.327547581Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 12:43:13.678502 containerd[1470]: time="2025-01-29T12:43:13.677599757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v9rg2,Uid:9cdbe399-d451-48b6-b761-0a0a74024888,Namespace:kube-system,Attempt:0,}" Jan 29 12:43:13.725325 containerd[1470]: time="2025-01-29T12:43:13.724638345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:43:13.725505 containerd[1470]: time="2025-01-29T12:43:13.725356108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:43:13.725505 containerd[1470]: time="2025-01-29T12:43:13.725421804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:43:13.725645 containerd[1470]: time="2025-01-29T12:43:13.725595426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:43:13.752424 systemd[1]: Started cri-containerd-f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870.scope - libcontainer container f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870. Jan 29 12:43:13.774390 containerd[1470]: time="2025-01-29T12:43:13.774336481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v9rg2,Uid:9cdbe399-d451-48b6-b761-0a0a74024888,Namespace:kube-system,Attempt:0,} returns sandbox id \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\"" Jan 29 12:43:15.118626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2069289739.mount: Deactivated successfully. 
Jan 29 12:43:16.102718 containerd[1470]: time="2025-01-29T12:43:16.102638315Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:43:16.104153 containerd[1470]: time="2025-01-29T12:43:16.103954429Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 12:43:16.105425 containerd[1470]: time="2025-01-29T12:43:16.105391764Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:43:16.107528 containerd[1470]: time="2025-01-29T12:43:16.107039251Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.779443468s" Jan 29 12:43:16.107528 containerd[1470]: time="2025-01-29T12:43:16.107078776Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 12:43:16.108739 containerd[1470]: time="2025-01-29T12:43:16.108697588Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 12:43:16.111745 containerd[1470]: time="2025-01-29T12:43:16.111711715Z" level=info msg="CreateContainer within sandbox \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 12:43:16.134808 containerd[1470]: time="2025-01-29T12:43:16.134692927Z" level=info msg="CreateContainer within sandbox \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\"" Jan 29 12:43:16.136256 containerd[1470]: time="2025-01-29T12:43:16.135540425Z" level=info msg="StartContainer for \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\"" Jan 29 12:43:16.172377 systemd[1]: Started cri-containerd-7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628.scope - libcontainer container 7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628. 
Jan 29 12:43:16.208105 containerd[1470]: time="2025-01-29T12:43:16.207992143Z" level=info msg="StartContainer for \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\" returns successfully" Jan 29 12:43:16.522963 kubelet[2692]: I0129 12:43:16.522906 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6l2qh" podStartSLOduration=5.522887646 podStartE2EDuration="5.522887646s" podCreationTimestamp="2025-01-29 12:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:43:13.517296504 +0000 UTC m=+16.590730824" watchObservedRunningTime="2025-01-29 12:43:16.522887646 +0000 UTC m=+19.596321916" Jan 29 12:43:16.523384 kubelet[2692]: I0129 12:43:16.523134 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-rggcd" podStartSLOduration=1.742054783 podStartE2EDuration="4.523127234s" podCreationTimestamp="2025-01-29 12:43:12 +0000 UTC" firstStartedPulling="2025-01-29 12:43:13.32692462 +0000 UTC m=+16.400358900" lastFinishedPulling="2025-01-29 12:43:16.10799707 +0000 UTC m=+19.181431351" observedRunningTime="2025-01-29 12:43:16.522449589 +0000 UTC m=+19.595883879" watchObservedRunningTime="2025-01-29 12:43:16.523127234 +0000 UTC m=+19.596561504" Jan 29 12:43:36.961322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount470643300.mount: Deactivated successfully. Jan 29 12:43:39.539496 containerd[1470]: time="2025-01-29T12:43:39.538959140Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:43:39.541334 containerd[1470]: time="2025-01-29T12:43:39.541094729Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 12:43:39.545864 containerd[1470]: time="2025-01-29T12:43:39.545803626Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:43:39.547674 containerd[1470]: time="2025-01-29T12:43:39.547560206Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 23.438823393s" Jan 29 12:43:39.547674 containerd[1470]: time="2025-01-29T12:43:39.547592898Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 12:43:39.550379 containerd[1470]: time="2025-01-29T12:43:39.550267108Z" level=info msg="CreateContainer within sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 12:43:39.580624 containerd[1470]: time="2025-01-29T12:43:39.580562711Z" level=info msg="CreateContainer within sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d\"" Jan 29 12:43:39.582255 containerd[1470]: time="2025-01-29T12:43:39.581382916Z" level=info msg="StartContainer for \"70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d\"" Jan 29 12:43:39.623549 systemd[1]: Started cri-containerd-70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d.scope - libcontainer container 70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d. Jan 29 12:43:39.683068 systemd[1]: cri-containerd-70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d.scope: Deactivated successfully. Jan 29 12:43:39.687683 containerd[1470]: time="2025-01-29T12:43:39.687419445Z" level=info msg="StartContainer for \"70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d\" returns successfully" Jan 29 12:43:40.568829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d-rootfs.mount: Deactivated successfully. Jan 29 12:43:40.951077 containerd[1470]: time="2025-01-29T12:43:40.950911016Z" level=info msg="shim disconnected" id=70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d namespace=k8s.io Jan 29 12:43:40.951077 containerd[1470]: time="2025-01-29T12:43:40.951025012Z" level=warning msg="cleaning up after shim disconnected" id=70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d namespace=k8s.io Jan 29 12:43:40.951077 containerd[1470]: time="2025-01-29T12:43:40.951051010Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:43:41.611019 containerd[1470]: time="2025-01-29T12:43:41.610916109Z" level=info msg="CreateContainer within sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 12:43:41.660382 containerd[1470]: time="2025-01-29T12:43:41.660065535Z" level=info msg="CreateContainer within sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706\"" Jan 29 12:43:41.663642 containerd[1470]: time="2025-01-29T12:43:41.662977933Z" level=info msg="StartContainer for \"fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706\"" Jan 29 12:43:41.710380 systemd[1]: Started cri-containerd-fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706.scope - libcontainer container fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706. Jan 29 12:43:41.758809 containerd[1470]: time="2025-01-29T12:43:41.758771267Z" level=info msg="StartContainer for \"fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706\" returns successfully" Jan 29 12:43:41.765060 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:43:41.765348 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:43:41.765411 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:43:41.774048 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:43:41.774344 systemd[1]: cri-containerd-fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706.scope: Deactivated successfully. 
Jan 29 12:43:41.795224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706-rootfs.mount: Deactivated successfully. Jan 29 12:43:41.796319 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:43:41.818655 containerd[1470]: time="2025-01-29T12:43:41.818581093Z" level=info msg="shim disconnected" id=fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706 namespace=k8s.io Jan 29 12:43:41.818655 containerd[1470]: time="2025-01-29T12:43:41.818637530Z" level=warning msg="cleaning up after shim disconnected" id=fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706 namespace=k8s.io Jan 29 12:43:41.818655 containerd[1470]: time="2025-01-29T12:43:41.818648380Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:43:42.623505 containerd[1470]: time="2025-01-29T12:43:42.623412595Z" level=info msg="CreateContainer within sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 12:43:42.682801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1617646976.mount: Deactivated successfully. Jan 29 12:43:42.689140 containerd[1470]: time="2025-01-29T12:43:42.689099783Z" level=info msg="CreateContainer within sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f\"" Jan 29 12:43:42.690417 containerd[1470]: time="2025-01-29T12:43:42.690379108Z" level=info msg="StartContainer for \"f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f\"" Jan 29 12:43:42.747437 systemd[1]: Started cri-containerd-f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f.scope - libcontainer container f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f. Jan 29 12:43:42.786060 systemd[1]: cri-containerd-f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f.scope: Deactivated successfully. Jan 29 12:43:42.789573 containerd[1470]: time="2025-01-29T12:43:42.788989801Z" level=info msg="StartContainer for \"f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f\" returns successfully" Jan 29 12:43:42.811914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f-rootfs.mount: Deactivated successfully. 
Jan 29 12:43:42.831681 containerd[1470]: time="2025-01-29T12:43:42.831458040Z" level=info msg="shim disconnected" id=f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f namespace=k8s.io Jan 29 12:43:42.831681 containerd[1470]: time="2025-01-29T12:43:42.831516501Z" level=warning msg="cleaning up after shim disconnected" id=f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f namespace=k8s.io Jan 29 12:43:42.831681 containerd[1470]: time="2025-01-29T12:43:42.831526469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:43:43.622695 containerd[1470]: time="2025-01-29T12:43:43.622627345Z" level=info msg="CreateContainer within sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 12:43:43.651158 containerd[1470]: time="2025-01-29T12:43:43.650997353Z" level=info msg="CreateContainer within sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd\"" Jan 29 12:43:43.652343 containerd[1470]: time="2025-01-29T12:43:43.651929809Z" level=info msg="StartContainer for \"3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd\"" Jan 29 12:43:43.689532 systemd[1]: run-containerd-runc-k8s.io-3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd-runc.NUKPxv.mount: Deactivated successfully. Jan 29 12:43:43.700453 systemd[1]: Started cri-containerd-3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd.scope - libcontainer container 3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd. Jan 29 12:43:43.724662 systemd[1]: cri-containerd-3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd.scope: Deactivated successfully. Jan 29 12:43:43.728928 containerd[1470]: time="2025-01-29T12:43:43.728715846Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cdbe399_d451_48b6_b761_0a0a74024888.slice/cri-containerd-3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd.scope/memory.events\": no such file or directory" Jan 29 12:43:43.738105 containerd[1470]: time="2025-01-29T12:43:43.737788882Z" level=info msg="StartContainer for \"3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd\" returns successfully" Jan 29 12:43:43.760891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd-rootfs.mount: Deactivated successfully. 
Jan 29 12:43:43.795159 containerd[1470]: time="2025-01-29T12:43:43.795020920Z" level=info msg="shim disconnected" id=3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd namespace=k8s.io Jan 29 12:43:43.795683 containerd[1470]: time="2025-01-29T12:43:43.795125618Z" level=warning msg="cleaning up after shim disconnected" id=3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd namespace=k8s.io Jan 29 12:43:43.795683 containerd[1470]: time="2025-01-29T12:43:43.795396772Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:43:44.636226 containerd[1470]: time="2025-01-29T12:43:44.636095822Z" level=info msg="CreateContainer within sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 12:43:44.701540 containerd[1470]: time="2025-01-29T12:43:44.701450021Z" level=info msg="CreateContainer within sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\"" Jan 29 12:43:44.704337 containerd[1470]: time="2025-01-29T12:43:44.703634709Z" level=info msg="StartContainer for \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\"" Jan 29 12:43:44.753372 systemd[1]: Started cri-containerd-aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5.scope - libcontainer container aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5. Jan 29 12:43:44.785132 containerd[1470]: time="2025-01-29T12:43:44.785082686Z" level=info msg="StartContainer for \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\" returns successfully" Jan 29 12:43:44.898765 kubelet[2692]: I0129 12:43:44.898673 2692 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:43:44.931667 kubelet[2692]: I0129 12:43:44.930715 2692 topology_manager.go:215] "Topology Admit Handler" podUID="30533876-d709-4ccb-9b68-863c7b394c0f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bplk2" Jan 29 12:43:44.939281 kubelet[2692]: I0129 12:43:44.939050 2692 topology_manager.go:215] "Topology Admit Handler" podUID="e4378013-2ea0-4d40-bc3c-27390dd44474" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zsc8t" Jan 29 12:43:44.940360 systemd[1]: Created slice kubepods-burstable-pod30533876_d709_4ccb_9b68_863c7b394c0f.slice - libcontainer container kubepods-burstable-pod30533876_d709_4ccb_9b68_863c7b394c0f.slice. 
Jan 29 12:43:44.942057 kubelet[2692]: I0129 12:43:44.941160 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30533876-d709-4ccb-9b68-863c7b394c0f-config-volume\") pod \"coredns-7db6d8ff4d-bplk2\" (UID: \"30533876-d709-4ccb-9b68-863c7b394c0f\") " pod="kube-system/coredns-7db6d8ff4d-bplk2" Jan 29 12:43:44.942057 kubelet[2692]: I0129 12:43:44.941204 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmh49\" (UniqueName: \"kubernetes.io/projected/30533876-d709-4ccb-9b68-863c7b394c0f-kube-api-access-tmh49\") pod \"coredns-7db6d8ff4d-bplk2\" (UID: \"30533876-d709-4ccb-9b68-863c7b394c0f\") " pod="kube-system/coredns-7db6d8ff4d-bplk2" Jan 29 12:43:44.953079 systemd[1]: Created slice kubepods-burstable-pode4378013_2ea0_4d40_bc3c_27390dd44474.slice - libcontainer container kubepods-burstable-pode4378013_2ea0_4d40_bc3c_27390dd44474.slice. Jan 29 12:43:45.142471 kubelet[2692]: I0129 12:43:45.142270 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4378013-2ea0-4d40-bc3c-27390dd44474-config-volume\") pod \"coredns-7db6d8ff4d-zsc8t\" (UID: \"e4378013-2ea0-4d40-bc3c-27390dd44474\") " pod="kube-system/coredns-7db6d8ff4d-zsc8t" Jan 29 12:43:45.142471 kubelet[2692]: I0129 12:43:45.142322 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsnkt\" (UniqueName: \"kubernetes.io/projected/e4378013-2ea0-4d40-bc3c-27390dd44474-kube-api-access-rsnkt\") pod \"coredns-7db6d8ff4d-zsc8t\" (UID: \"e4378013-2ea0-4d40-bc3c-27390dd44474\") " pod="kube-system/coredns-7db6d8ff4d-zsc8t" Jan 29 12:43:45.246554 containerd[1470]: time="2025-01-29T12:43:45.246245231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bplk2,Uid:30533876-d709-4ccb-9b68-863c7b394c0f,Namespace:kube-system,Attempt:0,}" Jan 29 12:43:45.258147 containerd[1470]: time="2025-01-29T12:43:45.257830478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsc8t,Uid:e4378013-2ea0-4d40-bc3c-27390dd44474,Namespace:kube-system,Attempt:0,}" Jan 29 12:43:45.677457 kubelet[2692]: I0129 12:43:45.677349 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v9rg2" podStartSLOduration=8.905046855 podStartE2EDuration="34.677316472s" podCreationTimestamp="2025-01-29 12:43:11 +0000 UTC" firstStartedPulling="2025-01-29 12:43:13.776143266 +0000 UTC m=+16.849577546" lastFinishedPulling="2025-01-29 12:43:39.548412893 +0000 UTC m=+42.621847163" observedRunningTime="2025-01-29 12:43:45.675047316 +0000 UTC m=+48.748481696" watchObservedRunningTime="2025-01-29 12:43:45.677316472 +0000 UTC m=+48.750750793" Jan 29 12:43:45.703068 systemd[1]: run-containerd-runc-k8s.io-aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5-runc.v1yfM3.mount: Deactivated successfully. 
Jan 29 12:43:46.939050 systemd-networkd[1360]: cilium_host: Link UP Jan 29 12:43:46.939208 systemd-networkd[1360]: cilium_net: Link UP Jan 29 12:43:46.939488 systemd-networkd[1360]: cilium_net: Gained carrier Jan 29 12:43:46.939737 systemd-networkd[1360]: cilium_host: Gained carrier Jan 29 12:43:47.042896 systemd-networkd[1360]: cilium_vxlan: Link UP Jan 29 12:43:47.042903 systemd-networkd[1360]: cilium_vxlan: Gained carrier Jan 29 12:43:47.310452 kernel: NET: Registered PF_ALG protocol family Jan 29 12:43:47.582555 systemd-networkd[1360]: cilium_net: Gained IPv6LL Jan 29 12:43:47.645487 systemd-networkd[1360]: cilium_host: Gained IPv6LL Jan 29 12:43:48.072125 systemd-networkd[1360]: lxc_health: Link UP Jan 29 12:43:48.078924 systemd-networkd[1360]: lxc_health: Gained carrier Jan 29 12:43:48.093309 systemd-networkd[1360]: cilium_vxlan: Gained IPv6LL Jan 29 12:43:48.314781 systemd-networkd[1360]: lxcae096e9a6ae1: Link UP Jan 29 12:43:48.326308 kernel: eth0: renamed from tmpff644 Jan 29 12:43:48.341712 systemd-networkd[1360]: lxcae096e9a6ae1: Gained carrier Jan 29 12:43:48.376791 kernel: eth0: renamed from tmpd0387 Jan 29 12:43:48.371798 systemd-networkd[1360]: lxc019fd1f5a0fc: Link UP Jan 29 12:43:48.386408 systemd-networkd[1360]: lxc019fd1f5a0fc: Gained carrier Jan 29 12:43:49.757426 systemd-networkd[1360]: lxc_health: Gained IPv6LL Jan 29 12:43:49.949483 systemd-networkd[1360]: lxc019fd1f5a0fc: Gained IPv6LL Jan 29 12:43:50.333409 systemd-networkd[1360]: lxcae096e9a6ae1: Gained IPv6LL Jan 29 12:43:52.741637 containerd[1470]: time="2025-01-29T12:43:52.741437483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:43:52.741637 containerd[1470]: time="2025-01-29T12:43:52.741509829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:43:52.741637 containerd[1470]: time="2025-01-29T12:43:52.741530198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:43:52.742489 containerd[1470]: time="2025-01-29T12:43:52.741612904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:43:52.773046 systemd[1]: Started cri-containerd-ff644ec93f6c99fe22ad515f218a2da047adf9d3de4885a58bfd32acf37e7d19.scope - libcontainer container ff644ec93f6c99fe22ad515f218a2da047adf9d3de4885a58bfd32acf37e7d19. Jan 29 12:43:52.798276 containerd[1470]: time="2025-01-29T12:43:52.797766588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:43:52.798276 containerd[1470]: time="2025-01-29T12:43:52.797904810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:43:52.798276 containerd[1470]: time="2025-01-29T12:43:52.797959703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:43:52.798276 containerd[1470]: time="2025-01-29T12:43:52.798163819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:43:52.832422 systemd[1]: Started cri-containerd-d0387d2e9f87b744c802236224146814c1ba0b3eac6500edb26a98fb83c879b6.scope - libcontainer container d0387d2e9f87b744c802236224146814c1ba0b3eac6500edb26a98fb83c879b6. Jan 29 12:43:52.852478 containerd[1470]: time="2025-01-29T12:43:52.852368056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bplk2,Uid:30533876-d709-4ccb-9b68-863c7b394c0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff644ec93f6c99fe22ad515f218a2da047adf9d3de4885a58bfd32acf37e7d19\"" Jan 29 12:43:52.856742 containerd[1470]: time="2025-01-29T12:43:52.856695992Z" level=info msg="CreateContainer within sandbox \"ff644ec93f6c99fe22ad515f218a2da047adf9d3de4885a58bfd32acf37e7d19\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:43:52.888669 containerd[1470]: time="2025-01-29T12:43:52.888098042Z" level=info msg="CreateContainer within sandbox \"ff644ec93f6c99fe22ad515f218a2da047adf9d3de4885a58bfd32acf37e7d19\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"839da000970f131baadf63de6bca060fdd5455d424ffa25a90a2a62e3510d3fd\"" Jan 29 12:43:52.888954 containerd[1470]: time="2025-01-29T12:43:52.888933192Z" level=info msg="StartContainer for \"839da000970f131baadf63de6bca060fdd5455d424ffa25a90a2a62e3510d3fd\"" Jan 29 12:43:52.931045 systemd[1]: Started cri-containerd-839da000970f131baadf63de6bca060fdd5455d424ffa25a90a2a62e3510d3fd.scope - libcontainer container 839da000970f131baadf63de6bca060fdd5455d424ffa25a90a2a62e3510d3fd. Jan 29 12:43:52.940853 containerd[1470]: time="2025-01-29T12:43:52.940694478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsc8t,Uid:e4378013-2ea0-4d40-bc3c-27390dd44474,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0387d2e9f87b744c802236224146814c1ba0b3eac6500edb26a98fb83c879b6\"" Jan 29 12:43:52.947785 containerd[1470]: time="2025-01-29T12:43:52.947743429Z" level=info msg="CreateContainer within sandbox \"d0387d2e9f87b744c802236224146814c1ba0b3eac6500edb26a98fb83c879b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:43:52.979159 containerd[1470]: time="2025-01-29T12:43:52.979107819Z" level=info msg="StartContainer for \"839da000970f131baadf63de6bca060fdd5455d424ffa25a90a2a62e3510d3fd\" returns successfully" Jan 29 12:43:52.984085 containerd[1470]: time="2025-01-29T12:43:52.984044857Z" level=info msg="CreateContainer within sandbox \"d0387d2e9f87b744c802236224146814c1ba0b3eac6500edb26a98fb83c879b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2c5827a047e209f7e21418e9efccc12e169394841881cb2db16be58e0ac2948\"" Jan 29 12:43:52.984956 containerd[1470]: time="2025-01-29T12:43:52.984598915Z" level=info msg="StartContainer for \"c2c5827a047e209f7e21418e9efccc12e169394841881cb2db16be58e0ac2948\"" Jan 29 12:43:53.021383 systemd[1]: Started cri-containerd-c2c5827a047e209f7e21418e9efccc12e169394841881cb2db16be58e0ac2948.scope - libcontainer container c2c5827a047e209f7e21418e9efccc12e169394841881cb2db16be58e0ac2948. 
Jan 29 12:43:53.058868 containerd[1470]: time="2025-01-29T12:43:53.058816637Z" level=info msg="StartContainer for \"c2c5827a047e209f7e21418e9efccc12e169394841881cb2db16be58e0ac2948\" returns successfully" Jan 29 12:43:53.728883 kubelet[2692]: I0129 12:43:53.728722 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zsc8t" podStartSLOduration=41.728684948 podStartE2EDuration="41.728684948s" podCreationTimestamp="2025-01-29 12:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:43:53.695819629 +0000 UTC m=+56.769253959" watchObservedRunningTime="2025-01-29 12:43:53.728684948 +0000 UTC m=+56.802119268" Jan 29 12:43:53.776655 kubelet[2692]: I0129 12:43:53.776599 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bplk2" podStartSLOduration=41.776580871 podStartE2EDuration="41.776580871s" podCreationTimestamp="2025-01-29 12:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:43:53.728296484 +0000 UTC m=+56.801730854" watchObservedRunningTime="2025-01-29 12:43:53.776580871 +0000 UTC m=+56.850015141" Jan 29 12:44:16.759748 systemd[1]: Started sshd@9-172.24.4.118:22-172.24.4.1:58508.service - OpenSSH per-connection server daemon (172.24.4.1:58508). Jan 29 12:44:18.346402 sshd[4068]: Accepted publickey for core from 172.24.4.1 port 58508 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:44:18.348715 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:44:18.356305 systemd-logind[1448]: New session 12 of user core. Jan 29 12:44:18.367452 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:44:19.077141 sshd[4068]: pam_unix(sshd:session): session closed for user core Jan 29 12:44:19.082992 systemd[1]: sshd@9-172.24.4.118:22-172.24.4.1:58508.service: Deactivated successfully. Jan 29 12:44:19.086633 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:44:19.091445 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:44:19.094653 systemd-logind[1448]: Removed session 12. Jan 29 12:44:24.098850 systemd[1]: Started sshd@10-172.24.4.118:22-172.24.4.1:51424.service - OpenSSH per-connection server daemon (172.24.4.1:51424). Jan 29 12:44:25.506322 sshd[4083]: Accepted publickey for core from 172.24.4.1 port 51424 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:44:25.509344 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:44:25.519676 systemd-logind[1448]: New session 13 of user core. Jan 29 12:44:25.524574 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:44:26.474776 sshd[4083]: pam_unix(sshd:session): session closed for user core Jan 29 12:44:26.481209 systemd[1]: sshd@10-172.24.4.118:22-172.24.4.1:51424.service: Deactivated successfully. Jan 29 12:44:26.488987 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:44:26.493594 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:44:26.496480 systemd-logind[1448]: Removed session 13. Jan 29 12:44:31.492716 systemd[1]: Started sshd@11-172.24.4.118:22-172.24.4.1:51430.service - OpenSSH per-connection server daemon (172.24.4.1:51430). 
Jan 29 12:44:33.149765 sshd[4098]: Accepted publickey for core from 172.24.4.1 port 51430 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:44:33.153588 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:44:33.163035 systemd-logind[1448]: New session 14 of user core. Jan 29 12:44:33.173620 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:44:33.790007 sshd[4098]: pam_unix(sshd:session): session closed for user core Jan 29 12:44:33.797695 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:44:33.798208 systemd[1]: sshd@11-172.24.4.118:22-172.24.4.1:51430.service: Deactivated successfully. Jan 29 12:44:33.802023 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:44:33.806637 systemd-logind[1448]: Removed session 14. Jan 29 12:44:38.812189 systemd[1]: Started sshd@12-172.24.4.118:22-172.24.4.1:58204.service - OpenSSH per-connection server daemon (172.24.4.1:58204). Jan 29 12:44:40.264827 sshd[4112]: Accepted publickey for core from 172.24.4.1 port 58204 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:44:40.266967 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:44:40.273657 systemd-logind[1448]: New session 15 of user core. Jan 29 12:44:40.290505 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:44:40.913565 sshd[4112]: pam_unix(sshd:session): session closed for user core Jan 29 12:44:40.925094 systemd[1]: sshd@12-172.24.4.118:22-172.24.4.1:58204.service: Deactivated successfully. Jan 29 12:44:40.928154 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:44:40.929839 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:44:40.939786 systemd[1]: Started sshd@13-172.24.4.118:22-172.24.4.1:58220.service - OpenSSH per-connection server daemon (172.24.4.1:58220). Jan 29 12:44:40.942057 systemd-logind[1448]: Removed session 15. Jan 29 12:44:42.164318 sshd[4127]: Accepted publickey for core from 172.24.4.1 port 58220 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:44:42.166876 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:44:42.177575 systemd-logind[1448]: New session 16 of user core. Jan 29 12:44:42.189866 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:44:42.996154 sshd[4127]: pam_unix(sshd:session): session closed for user core Jan 29 12:44:43.007492 systemd[1]: sshd@13-172.24.4.118:22-172.24.4.1:58220.service: Deactivated successfully. Jan 29 12:44:43.011013 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:44:43.013133 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:44:43.021884 systemd[1]: Started sshd@14-172.24.4.118:22-172.24.4.1:58228.service - OpenSSH per-connection server daemon (172.24.4.1:58228). Jan 29 12:44:43.025980 systemd-logind[1448]: Removed session 16. Jan 29 12:44:44.413226 sshd[4138]: Accepted publickey for core from 172.24.4.1 port 58228 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:44:44.416467 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:44:44.426165 systemd-logind[1448]: New session 17 of user core. Jan 29 12:44:44.433588 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 29 12:44:45.161300 sshd[4138]: pam_unix(sshd:session): session closed for user core Jan 29 12:44:45.165355 systemd[1]: sshd@14-172.24.4.118:22-172.24.4.1:58228.service: Deactivated successfully. Jan 29 12:44:45.166969 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:44:45.168821 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:44:45.170657 systemd-logind[1448]: Removed session 17. Jan 29 12:44:50.187861 systemd[1]: Started sshd@15-172.24.4.118:22-172.24.4.1:57118.service - OpenSSH per-connection server daemon (172.24.4.1:57118). Jan 29 12:44:51.419320 sshd[4153]: Accepted publickey for core from 172.24.4.1 port 57118 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:44:51.422957 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:44:51.439282 systemd-logind[1448]: New session 18 of user core. Jan 29 12:44:51.445660 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 12:44:52.231213 sshd[4153]: pam_unix(sshd:session): session closed for user core Jan 29 12:44:52.244822 systemd[1]: sshd@15-172.24.4.118:22-172.24.4.1:57118.service: Deactivated successfully. Jan 29 12:44:52.249939 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:44:52.253877 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:44:52.260517 systemd[1]: Started sshd@16-172.24.4.118:22-172.24.4.1:57132.service - OpenSSH per-connection server daemon (172.24.4.1:57132). Jan 29 12:44:52.265134 systemd-logind[1448]: Removed session 18. Jan 29 12:44:53.464008 sshd[4166]: Accepted publickey for core from 172.24.4.1 port 57132 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:44:53.467085 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:44:53.478186 systemd-logind[1448]: New session 19 of user core. Jan 29 12:44:53.488582 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:44:54.434634 sshd[4166]: pam_unix(sshd:session): session closed for user core Jan 29 12:44:54.446889 systemd[1]: sshd@16-172.24.4.118:22-172.24.4.1:57132.service: Deactivated successfully. Jan 29 12:44:54.451205 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:44:54.454848 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:44:54.468304 systemd[1]: Started sshd@17-172.24.4.118:22-172.24.4.1:57244.service - OpenSSH per-connection server daemon (172.24.4.1:57244). Jan 29 12:44:54.470860 systemd-logind[1448]: Removed session 19. Jan 29 12:44:55.746893 sshd[4177]: Accepted publickey for core from 172.24.4.1 port 57244 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:44:55.749072 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:44:55.756869 systemd-logind[1448]: New session 20 of user core. Jan 29 12:44:55.762560 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 12:44:58.606897 sshd[4177]: pam_unix(sshd:session): session closed for user core Jan 29 12:44:58.616829 systemd[1]: sshd@17-172.24.4.118:22-172.24.4.1:57244.service: Deactivated successfully. Jan 29 12:44:58.620056 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:44:58.622986 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. 
Jan 29 12:44:58.629847 systemd[1]: Started sshd@18-172.24.4.118:22-172.24.4.1:57260.service - OpenSSH per-connection server daemon (172.24.4.1:57260). Jan 29 12:44:58.631816 systemd-logind[1448]: Removed session 20. Jan 29 12:45:00.051398 sshd[4197]: Accepted publickey for core from 172.24.4.1 port 57260 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:45:00.055190 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:45:00.067599 systemd-logind[1448]: New session 21 of user core. Jan 29 12:45:00.080620 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 12:45:01.076399 sshd[4197]: pam_unix(sshd:session): session closed for user core Jan 29 12:45:01.084813 systemd[1]: sshd@18-172.24.4.118:22-172.24.4.1:57260.service: Deactivated successfully. Jan 29 12:45:01.086056 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 12:45:01.088135 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Jan 29 12:45:01.094865 systemd[1]: Started sshd@19-172.24.4.118:22-172.24.4.1:57270.service - OpenSSH per-connection server daemon (172.24.4.1:57270). Jan 29 12:45:01.097325 systemd-logind[1448]: Removed session 21. Jan 29 12:45:02.238062 sshd[4208]: Accepted publickey for core from 172.24.4.1 port 57270 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:45:02.241289 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:45:02.252719 systemd-logind[1448]: New session 22 of user core. Jan 29 12:45:02.257560 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 12:45:02.851598 sshd[4208]: pam_unix(sshd:session): session closed for user core Jan 29 12:45:02.858076 systemd[1]: sshd@19-172.24.4.118:22-172.24.4.1:57270.service: Deactivated successfully. Jan 29 12:45:02.861699 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 12:45:02.864041 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit. Jan 29 12:45:02.866544 systemd-logind[1448]: Removed session 22. Jan 29 12:45:07.874776 systemd[1]: Started sshd@20-172.24.4.118:22-172.24.4.1:49284.service - OpenSSH per-connection server daemon (172.24.4.1:49284). Jan 29 12:45:09.241378 sshd[4224]: Accepted publickey for core from 172.24.4.1 port 49284 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:45:09.244384 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:45:09.258049 systemd-logind[1448]: New session 23 of user core. Jan 29 12:45:09.274753 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 12:45:10.004616 sshd[4224]: pam_unix(sshd:session): session closed for user core Jan 29 12:45:10.010852 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit. Jan 29 12:45:10.012543 systemd[1]: sshd@20-172.24.4.118:22-172.24.4.1:49284.service: Deactivated successfully. Jan 29 12:45:10.016170 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 12:45:10.019488 systemd-logind[1448]: Removed session 23. Jan 29 12:45:15.030923 systemd[1]: Started sshd@21-172.24.4.118:22-172.24.4.1:59138.service - OpenSSH per-connection server daemon (172.24.4.1:59138). 
Jan 29 12:45:16.369281 sshd[4239]: Accepted publickey for core from 172.24.4.1 port 59138 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:45:16.372214 sshd[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:45:16.381485 systemd-logind[1448]: New session 24 of user core. Jan 29 12:45:16.394593 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 12:45:17.062665 sshd[4239]: pam_unix(sshd:session): session closed for user core Jan 29 12:45:17.066535 systemd[1]: sshd@21-172.24.4.118:22-172.24.4.1:59138.service: Deactivated successfully. Jan 29 12:45:17.069543 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 12:45:17.070654 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit. Jan 29 12:45:17.071972 systemd-logind[1448]: Removed session 24. Jan 29 12:45:22.091790 systemd[1]: Started sshd@22-172.24.4.118:22-172.24.4.1:59152.service - OpenSSH per-connection server daemon (172.24.4.1:59152). Jan 29 12:45:23.607507 sshd[4252]: Accepted publickey for core from 172.24.4.1 port 59152 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:45:23.610562 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:45:23.621585 systemd-logind[1448]: New session 25 of user core. Jan 29 12:45:23.628066 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 12:45:24.386382 sshd[4252]: pam_unix(sshd:session): session closed for user core Jan 29 12:45:24.397714 systemd[1]: sshd@22-172.24.4.118:22-172.24.4.1:59152.service: Deactivated successfully. Jan 29 12:45:24.404903 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 12:45:24.408193 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit. Jan 29 12:45:24.415853 systemd[1]: Started sshd@23-172.24.4.118:22-172.24.4.1:35650.service - OpenSSH per-connection server daemon (172.24.4.1:35650). Jan 29 12:45:24.418922 systemd-logind[1448]: Removed session 25. Jan 29 12:45:25.625943 sshd[4265]: Accepted publickey for core from 172.24.4.1 port 35650 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:45:25.628177 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:45:25.637686 systemd-logind[1448]: New session 26 of user core. Jan 29 12:45:25.642511 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 12:45:28.367497 containerd[1470]: time="2025-01-29T12:45:28.367345785Z" level=info msg="StopContainer for \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\" with timeout 30 (s)" Jan 29 12:45:28.385438 systemd[1]: run-containerd-runc-k8s.io-aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5-runc.G8mLEG.mount: Deactivated successfully. Jan 29 12:45:28.388516 containerd[1470]: time="2025-01-29T12:45:28.387185304Z" level=info msg="Stop container \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\" with signal terminated" Jan 29 12:45:28.401609 containerd[1470]: time="2025-01-29T12:45:28.401558850Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:45:28.402696 systemd[1]: cri-containerd-7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628.scope: Deactivated successfully. 
Jan 29 12:45:28.410000 containerd[1470]: time="2025-01-29T12:45:28.409951449Z" level=info msg="StopContainer for \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\" with timeout 2 (s)" Jan 29 12:45:28.410604 containerd[1470]: time="2025-01-29T12:45:28.410583079Z" level=info msg="Stop container \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\" with signal terminated" Jan 29 12:45:28.422159 systemd-networkd[1360]: lxc_health: Link DOWN Jan 29 12:45:28.422525 systemd-networkd[1360]: lxc_health: Lost carrier Jan 29 12:45:28.443427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628-rootfs.mount: Deactivated successfully. Jan 29 12:45:28.445018 systemd[1]: cri-containerd-aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5.scope: Deactivated successfully. Jan 29 12:45:28.445441 systemd[1]: cri-containerd-aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5.scope: Consumed 8.102s CPU time. Jan 29 12:45:28.451549 containerd[1470]: time="2025-01-29T12:45:28.451388384Z" level=info msg="shim disconnected" id=7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628 namespace=k8s.io Jan 29 12:45:28.451549 containerd[1470]: time="2025-01-29T12:45:28.451449851Z" level=warning msg="cleaning up after shim disconnected" id=7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628 namespace=k8s.io Jan 29 12:45:28.451549 containerd[1470]: time="2025-01-29T12:45:28.451461003Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:45:28.476359 containerd[1470]: time="2025-01-29T12:45:28.476315629Z" level=info msg="StopContainer for \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\" returns successfully" Jan 29 12:45:28.479886 containerd[1470]: time="2025-01-29T12:45:28.477803269Z" level=info msg="StopPodSandbox for \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\"" Jan 29 12:45:28.479886 containerd[1470]: time="2025-01-29T12:45:28.477855700Z" level=info msg="Container to stop \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:45:28.479793 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3-shm.mount: Deactivated successfully. Jan 29 12:45:28.485150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5-rootfs.mount: Deactivated successfully. Jan 29 12:45:28.493532 systemd[1]: cri-containerd-dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3.scope: Deactivated successfully. 
Jan 29 12:45:28.546797 containerd[1470]: time="2025-01-29T12:45:28.545955215Z" level=info msg="shim disconnected" id=dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3 namespace=k8s.io Jan 29 12:45:28.546797 containerd[1470]: time="2025-01-29T12:45:28.546352736Z" level=warning msg="cleaning up after shim disconnected" id=dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3 namespace=k8s.io Jan 29 12:45:28.546797 containerd[1470]: time="2025-01-29T12:45:28.546382733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:45:28.547160 containerd[1470]: time="2025-01-29T12:45:28.547091911Z" level=info msg="shim disconnected" id=aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5 namespace=k8s.io Jan 29 12:45:28.547280 containerd[1470]: time="2025-01-29T12:45:28.547260454Z" level=warning msg="cleaning up after shim disconnected" id=aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5 namespace=k8s.io Jan 29 12:45:28.547361 containerd[1470]: time="2025-01-29T12:45:28.547345839Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:45:28.573741 containerd[1470]: time="2025-01-29T12:45:28.573658728Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:45:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:45:28.578683 containerd[1470]: time="2025-01-29T12:45:28.578654209Z" level=info msg="TearDown network for sandbox \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\" successfully" Jan 29 12:45:28.578683 containerd[1470]: time="2025-01-29T12:45:28.578677313Z" level=info msg="StopPodSandbox for \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\" returns successfully" Jan 29 12:45:28.585440 containerd[1470]: time="2025-01-29T12:45:28.585404681Z" level=info msg="StopContainer for \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\" returns successfully" Jan 29 12:45:28.586291 containerd[1470]: time="2025-01-29T12:45:28.586000723Z" level=info msg="StopPodSandbox for \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\"" Jan 29 12:45:28.586291 containerd[1470]: time="2025-01-29T12:45:28.586044437Z" level=info msg="Container to stop \"f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:45:28.586291 containerd[1470]: time="2025-01-29T12:45:28.586058635Z" level=info msg="Container to stop \"3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:45:28.586291 containerd[1470]: time="2025-01-29T12:45:28.586069956Z" level=info msg="Container to stop \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:45:28.586291 containerd[1470]: time="2025-01-29T12:45:28.586081048Z" level=info msg="Container to stop \"70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:45:28.586291 containerd[1470]: time="2025-01-29T12:45:28.586092229Z" level=info msg="Container to stop \"fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:45:28.594374 systemd[1]: 
cri-containerd-f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870.scope: Deactivated successfully. Jan 29 12:45:28.609744 kubelet[2692]: I0129 12:45:28.608111 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2b8f2e1-2854-4f82-ad46-821e44d64f8b-cilium-config-path\") pod \"c2b8f2e1-2854-4f82-ad46-821e44d64f8b\" (UID: \"c2b8f2e1-2854-4f82-ad46-821e44d64f8b\") " Jan 29 12:45:28.609744 kubelet[2692]: I0129 12:45:28.609642 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd4pk\" (UniqueName: \"kubernetes.io/projected/c2b8f2e1-2854-4f82-ad46-821e44d64f8b-kube-api-access-nd4pk\") pod \"c2b8f2e1-2854-4f82-ad46-821e44d64f8b\" (UID: \"c2b8f2e1-2854-4f82-ad46-821e44d64f8b\") " Jan 29 12:45:28.613904 kubelet[2692]: I0129 12:45:28.613400 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2b8f2e1-2854-4f82-ad46-821e44d64f8b-kube-api-access-nd4pk" (OuterVolumeSpecName: "kube-api-access-nd4pk") pod "c2b8f2e1-2854-4f82-ad46-821e44d64f8b" (UID: "c2b8f2e1-2854-4f82-ad46-821e44d64f8b"). InnerVolumeSpecName "kube-api-access-nd4pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:45:28.613904 kubelet[2692]: I0129 12:45:28.613871 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2b8f2e1-2854-4f82-ad46-821e44d64f8b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c2b8f2e1-2854-4f82-ad46-821e44d64f8b" (UID: "c2b8f2e1-2854-4f82-ad46-821e44d64f8b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:45:28.646654 containerd[1470]: time="2025-01-29T12:45:28.645563503Z" level=info msg="shim disconnected" id=f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870 namespace=k8s.io Jan 29 12:45:28.646654 containerd[1470]: time="2025-01-29T12:45:28.645643046Z" level=warning msg="cleaning up after shim disconnected" id=f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870 namespace=k8s.io Jan 29 12:45:28.646654 containerd[1470]: time="2025-01-29T12:45:28.645657323Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:45:28.661502 containerd[1470]: time="2025-01-29T12:45:28.661419791Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:45:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:45:28.663089 containerd[1470]: time="2025-01-29T12:45:28.662725482Z" level=info msg="TearDown network for sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" successfully" Jan 29 12:45:28.663089 containerd[1470]: time="2025-01-29T12:45:28.662754097Z" level=info msg="StopPodSandbox for \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" returns successfully" Jan 29 12:45:28.710070 kubelet[2692]: I0129 12:45:28.709979 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-cni-path\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.710070 kubelet[2692]: I0129 12:45:28.710063 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-cni-path" (OuterVolumeSpecName: "cni-path") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:45:28.710328 kubelet[2692]: I0129 12:45:28.710138 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-etc-cni-netd\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.710328 kubelet[2692]: I0129 12:45:28.710162 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-bpf-maps\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.710328 kubelet[2692]: I0129 12:45:28.710201 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:45:28.710328 kubelet[2692]: I0129 12:45:28.710258 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:45:28.710328 kubelet[2692]: I0129 12:45:28.710287 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9cdbe399-d451-48b6-b761-0a0a74024888-hubble-tls\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.710328 kubelet[2692]: I0129 12:45:28.710306 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-xtables-lock\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.710616 kubelet[2692]: I0129 12:45:28.710581 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9cdbe399-d451-48b6-b761-0a0a74024888-cilium-config-path\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.710616 kubelet[2692]: I0129 12:45:28.710606 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-cilium-cgroup\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.710616 kubelet[2692]: I0129 12:45:28.710627 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-lib-modules\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: 
\"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.710836 kubelet[2692]: I0129 12:45:28.710647 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-cilium-run\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.710836 kubelet[2692]: I0129 12:45:28.710674 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9cdbe399-d451-48b6-b761-0a0a74024888-clustermesh-secrets\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.710836 kubelet[2692]: I0129 12:45:28.710701 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzdfn\" (UniqueName: \"kubernetes.io/projected/9cdbe399-d451-48b6-b761-0a0a74024888-kube-api-access-tzdfn\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.710836 kubelet[2692]: I0129 12:45:28.710719 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-host-proc-sys-kernel\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.710836 kubelet[2692]: I0129 12:45:28.710736 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-hostproc\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.710836 kubelet[2692]: I0129 12:45:28.710753 2692 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-host-proc-sys-net\") pod \"9cdbe399-d451-48b6-b761-0a0a74024888\" (UID: \"9cdbe399-d451-48b6-b761-0a0a74024888\") " Jan 29 12:45:28.711128 kubelet[2692]: I0129 12:45:28.710783 2692 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2b8f2e1-2854-4f82-ad46-821e44d64f8b-cilium-config-path\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.711128 kubelet[2692]: I0129 12:45:28.710795 2692 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nd4pk\" (UniqueName: \"kubernetes.io/projected/c2b8f2e1-2854-4f82-ad46-821e44d64f8b-kube-api-access-nd4pk\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.711128 kubelet[2692]: I0129 12:45:28.710804 2692 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-cni-path\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.711128 kubelet[2692]: I0129 12:45:28.710816 2692 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-etc-cni-netd\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.711128 kubelet[2692]: I0129 12:45:28.710825 2692 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-bpf-maps\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.711128 kubelet[2692]: I0129 12:45:28.710847 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:45:28.711530 kubelet[2692]: I0129 12:45:28.710865 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:45:28.713275 kubelet[2692]: I0129 12:45:28.713216 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cdbe399-d451-48b6-b761-0a0a74024888-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:45:28.713488 kubelet[2692]: I0129 12:45:28.713293 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:45:28.713488 kubelet[2692]: I0129 12:45:28.713311 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:45:28.713488 kubelet[2692]: I0129 12:45:28.713328 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:45:28.713488 kubelet[2692]: I0129 12:45:28.713387 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:45:28.714172 kubelet[2692]: I0129 12:45:28.714022 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-hostproc" (OuterVolumeSpecName: "hostproc") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:45:28.715821 kubelet[2692]: I0129 12:45:28.715775 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cdbe399-d451-48b6-b761-0a0a74024888-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:45:28.716050 kubelet[2692]: I0129 12:45:28.715875 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cdbe399-d451-48b6-b761-0a0a74024888-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:45:28.717681 kubelet[2692]: I0129 12:45:28.717637 2692 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cdbe399-d451-48b6-b761-0a0a74024888-kube-api-access-tzdfn" (OuterVolumeSpecName: "kube-api-access-tzdfn") pod "9cdbe399-d451-48b6-b761-0a0a74024888" (UID: "9cdbe399-d451-48b6-b761-0a0a74024888"). InnerVolumeSpecName "kube-api-access-tzdfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:45:28.811737 kubelet[2692]: I0129 12:45:28.811529 2692 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-xtables-lock\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.812134 kubelet[2692]: I0129 12:45:28.811867 2692 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9cdbe399-d451-48b6-b761-0a0a74024888-cilium-config-path\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.812612 kubelet[2692]: I0129 12:45:28.812324 2692 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-cilium-cgroup\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.812612 kubelet[2692]: I0129 12:45:28.812394 2692 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-lib-modules\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.812612 kubelet[2692]: I0129 12:45:28.812421 2692 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-cilium-run\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.812612 kubelet[2692]: I0129 12:45:28.812444 2692 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9cdbe399-d451-48b6-b761-0a0a74024888-clustermesh-secrets\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.812612 kubelet[2692]: I0129 12:45:28.812469 2692 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tzdfn\" (UniqueName: \"kubernetes.io/projected/9cdbe399-d451-48b6-b761-0a0a74024888-kube-api-access-tzdfn\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.812612 kubelet[2692]: I0129 
12:45:28.812493 2692 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-host-proc-sys-kernel\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.812612 kubelet[2692]: I0129 12:45:28.812517 2692 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-hostproc\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.813088 kubelet[2692]: I0129 12:45:28.812542 2692 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9cdbe399-d451-48b6-b761-0a0a74024888-host-proc-sys-net\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.813088 kubelet[2692]: I0129 12:45:28.812564 2692 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9cdbe399-d451-48b6-b761-0a0a74024888-hubble-tls\") on node \"ci-4081-3-0-e-97e17aa81b.novalocal\" DevicePath \"\"" Jan 29 12:45:28.976212 kubelet[2692]: I0129 12:45:28.976080 2692 scope.go:117] "RemoveContainer" containerID="7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628" Jan 29 12:45:28.982391 containerd[1470]: time="2025-01-29T12:45:28.981607696Z" level=info msg="RemoveContainer for \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\"" Jan 29 12:45:28.992811 systemd[1]: Removed slice kubepods-besteffort-podc2b8f2e1_2854_4f82_ad46_821e44d64f8b.slice - libcontainer container kubepods-besteffort-podc2b8f2e1_2854_4f82_ad46_821e44d64f8b.slice. Jan 29 12:45:29.009655 systemd[1]: Removed slice kubepods-burstable-pod9cdbe399_d451_48b6_b761_0a0a74024888.slice - libcontainer container kubepods-burstable-pod9cdbe399_d451_48b6_b761_0a0a74024888.slice. Jan 29 12:45:29.009977 systemd[1]: kubepods-burstable-pod9cdbe399_d451_48b6_b761_0a0a74024888.slice: Consumed 8.195s CPU time. 
Jan 29 12:45:29.066087 containerd[1470]: time="2025-01-29T12:45:29.065917708Z" level=info msg="RemoveContainer for \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\" returns successfully" Jan 29 12:45:29.066949 kubelet[2692]: I0129 12:45:29.066559 2692 scope.go:117] "RemoveContainer" containerID="7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628" Jan 29 12:45:29.067142 containerd[1470]: time="2025-01-29T12:45:29.067027735Z" level=error msg="ContainerStatus for \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\": not found" Jan 29 12:45:29.068090 kubelet[2692]: E0129 12:45:29.067376 2692 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\": not found" containerID="7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628" Jan 29 12:45:29.068090 kubelet[2692]: I0129 12:45:29.067500 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628"} err="failed to get container status \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ad271a5591fa70dc5b0686ae927187b8983b55672b517a825e9a7c00eb44628\": not found" Jan 29 12:45:29.068090 kubelet[2692]: I0129 12:45:29.067777 2692 scope.go:117] "RemoveContainer" containerID="aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5" Jan 29 12:45:29.073496 containerd[1470]: time="2025-01-29T12:45:29.072266478Z" level=info msg="RemoveContainer for \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\"" Jan 29 12:45:29.087708 containerd[1470]: time="2025-01-29T12:45:29.087624354Z" level=info msg="RemoveContainer for \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\" returns successfully" Jan 29 12:45:29.089843 kubelet[2692]: I0129 12:45:29.089782 2692 scope.go:117] "RemoveContainer" containerID="3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd" Jan 29 12:45:29.096354 containerd[1470]: time="2025-01-29T12:45:29.096047969Z" level=info msg="RemoveContainer for \"3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd\"" Jan 29 12:45:29.102603 containerd[1470]: time="2025-01-29T12:45:29.102430213Z" level=info msg="RemoveContainer for \"3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd\" returns successfully" Jan 29 12:45:29.102898 kubelet[2692]: I0129 12:45:29.102752 2692 scope.go:117] "RemoveContainer" containerID="f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f" Jan 29 12:45:29.105265 containerd[1470]: time="2025-01-29T12:45:29.105176723Z" level=info msg="RemoveContainer for \"f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f\"" Jan 29 12:45:29.112236 containerd[1470]: time="2025-01-29T12:45:29.112175880Z" level=info msg="RemoveContainer for \"f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f\" returns successfully" Jan 29 12:45:29.112719 kubelet[2692]: I0129 12:45:29.112684 2692 scope.go:117] "RemoveContainer" containerID="fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706" Jan 29 12:45:29.114588 containerd[1470]: 
time="2025-01-29T12:45:29.114546079Z" level=info msg="RemoveContainer for \"fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706\"" Jan 29 12:45:29.119713 containerd[1470]: time="2025-01-29T12:45:29.119662309Z" level=info msg="RemoveContainer for \"fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706\" returns successfully" Jan 29 12:45:29.119988 kubelet[2692]: I0129 12:45:29.119927 2692 scope.go:117] "RemoveContainer" containerID="70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d" Jan 29 12:45:29.121887 containerd[1470]: time="2025-01-29T12:45:29.121832705Z" level=info msg="RemoveContainer for \"70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d\"" Jan 29 12:45:29.126158 containerd[1470]: time="2025-01-29T12:45:29.126107984Z" level=info msg="RemoveContainer for \"70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d\" returns successfully" Jan 29 12:45:29.126525 kubelet[2692]: I0129 12:45:29.126463 2692 scope.go:117] "RemoveContainer" containerID="aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5" Jan 29 12:45:29.126907 containerd[1470]: time="2025-01-29T12:45:29.126858432Z" level=error msg="ContainerStatus for \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\": not found" Jan 29 12:45:29.127165 kubelet[2692]: E0129 12:45:29.127048 2692 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\": not found" containerID="aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5" Jan 29 12:45:29.127165 kubelet[2692]: I0129 12:45:29.127093 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5"} err="failed to get container status \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa517b55762dbe701bc26c056436a71210050843d1e3fc75da551a159b962bb5\": not found" Jan 29 12:45:29.127165 kubelet[2692]: I0129 12:45:29.127114 2692 scope.go:117] "RemoveContainer" containerID="3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd" Jan 29 12:45:29.127629 containerd[1470]: time="2025-01-29T12:45:29.127464663Z" level=error msg="ContainerStatus for \"3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd\": not found" Jan 29 12:45:29.127761 kubelet[2692]: E0129 12:45:29.127731 2692 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd\": not found" containerID="3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd" Jan 29 12:45:29.127822 kubelet[2692]: I0129 12:45:29.127777 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd"} err="failed to get container status 
\"3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e863a03b058cd2d89bf99e671182b48031dcc69946d202793b053b1dc191afd\": not found" Jan 29 12:45:29.127822 kubelet[2692]: I0129 12:45:29.127815 2692 scope.go:117] "RemoveContainer" containerID="f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f" Jan 29 12:45:29.128113 containerd[1470]: time="2025-01-29T12:45:29.128067126Z" level=error msg="ContainerStatus for \"f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f\": not found" Jan 29 12:45:29.128456 kubelet[2692]: E0129 12:45:29.128296 2692 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f\": not found" containerID="f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f" Jan 29 12:45:29.128456 kubelet[2692]: I0129 12:45:29.128345 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f"} err="failed to get container status \"f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3a4735f1032a798c8611ca61cd5044f5777d269e827426cfe23cdb41e6ff86f\": not found" Jan 29 12:45:29.128456 kubelet[2692]: I0129 12:45:29.128366 2692 scope.go:117] "RemoveContainer" containerID="fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706" Jan 29 12:45:29.128672 containerd[1470]: time="2025-01-29T12:45:29.128596320Z" level=error msg="ContainerStatus for \"fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706\": not found" Jan 29 12:45:29.128820 kubelet[2692]: E0129 12:45:29.128785 2692 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706\": not found" containerID="fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706" Jan 29 12:45:29.128867 kubelet[2692]: I0129 12:45:29.128828 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706"} err="failed to get container status \"fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb0ac2be4f60b33e89b543524ab60acdd3005e2d94e438c0ce2f461b3f82a706\": not found" Jan 29 12:45:29.128867 kubelet[2692]: I0129 12:45:29.128858 2692 scope.go:117] "RemoveContainer" containerID="70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d" Jan 29 12:45:29.129204 containerd[1470]: time="2025-01-29T12:45:29.129156343Z" level=error msg="ContainerStatus for \"70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d\": not found" Jan 29 12:45:29.129436 kubelet[2692]: E0129 12:45:29.129392 2692 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d\": not found" containerID="70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d" Jan 29 12:45:29.129436 kubelet[2692]: I0129 12:45:29.129418 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d"} err="failed to get container status \"70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d\": rpc error: code = NotFound desc = an error occurred when try to find container \"70d18d58be4a1601fe2c216faa55b943e8a432f6d5cf526876bc792fc0d5be3d\": not found" Jan 29 12:45:29.375867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870-rootfs.mount: Deactivated successfully. Jan 29 12:45:29.376102 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870-shm.mount: Deactivated successfully. Jan 29 12:45:29.376321 systemd[1]: var-lib-kubelet-pods-9cdbe399\x2dd451\x2d48b6\x2db761\x2d0a0a74024888-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 12:45:29.376564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3-rootfs.mount: Deactivated successfully. Jan 29 12:45:29.376717 systemd[1]: var-lib-kubelet-pods-9cdbe399\x2dd451\x2d48b6\x2db761\x2d0a0a74024888-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtzdfn.mount: Deactivated successfully. Jan 29 12:45:29.377392 systemd[1]: var-lib-kubelet-pods-c2b8f2e1\x2d2854\x2d4f82\x2dad46\x2d821e44d64f8b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnd4pk.mount: Deactivated successfully. Jan 29 12:45:29.377583 systemd[1]: var-lib-kubelet-pods-9cdbe399\x2dd451\x2d48b6\x2db761\x2d0a0a74024888-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 12:45:29.422504 kubelet[2692]: I0129 12:45:29.422452 2692 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cdbe399-d451-48b6-b761-0a0a74024888" path="/var/lib/kubelet/pods/9cdbe399-d451-48b6-b761-0a0a74024888/volumes" Jan 29 12:45:29.423934 kubelet[2692]: I0129 12:45:29.423886 2692 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2b8f2e1-2854-4f82-ad46-821e44d64f8b" path="/var/lib/kubelet/pods/c2b8f2e1-2854-4f82-ad46-821e44d64f8b/volumes" Jan 29 12:45:30.440798 sshd[4265]: pam_unix(sshd:session): session closed for user core Jan 29 12:45:30.454214 systemd[1]: sshd@23-172.24.4.118:22-172.24.4.1:35650.service: Deactivated successfully. Jan 29 12:45:30.458762 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 12:45:30.459224 systemd[1]: session-26.scope: Consumed 1.736s CPU time. Jan 29 12:45:30.463419 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit. Jan 29 12:45:30.471641 systemd[1]: Started sshd@24-172.24.4.118:22-172.24.4.1:35660.service - OpenSSH per-connection server daemon (172.24.4.1:35660). Jan 29 12:45:30.474996 systemd-logind[1448]: Removed session 26. 
Jan 29 12:45:31.491747 sshd[4428]: Accepted publickey for core from 172.24.4.1 port 35660 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:45:31.494222 sshd[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:45:31.505368 systemd-logind[1448]: New session 27 of user core. Jan 29 12:45:31.512599 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 12:45:32.574057 kubelet[2692]: E0129 12:45:32.573999 2692 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 12:45:33.044282 kubelet[2692]: I0129 12:45:33.044205 2692 topology_manager.go:215] "Topology Admit Handler" podUID="3d05946e-ae0c-495d-b518-4e97baeb2cb4" podNamespace="kube-system" podName="cilium-p65zw" Jan 29 12:45:33.044282 kubelet[2692]: E0129 12:45:33.044284 2692 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9cdbe399-d451-48b6-b761-0a0a74024888" containerName="apply-sysctl-overwrites" Jan 29 12:45:33.044282 kubelet[2692]: E0129 12:45:33.044295 2692 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9cdbe399-d451-48b6-b761-0a0a74024888" containerName="mount-bpf-fs" Jan 29 12:45:33.044502 kubelet[2692]: E0129 12:45:33.044303 2692 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2b8f2e1-2854-4f82-ad46-821e44d64f8b" containerName="cilium-operator" Jan 29 12:45:33.044502 kubelet[2692]: E0129 12:45:33.044310 2692 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9cdbe399-d451-48b6-b761-0a0a74024888" containerName="mount-cgroup" Jan 29 12:45:33.044502 kubelet[2692]: E0129 12:45:33.044317 2692 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9cdbe399-d451-48b6-b761-0a0a74024888" containerName="clean-cilium-state" Jan 29 12:45:33.044502 kubelet[2692]: E0129 12:45:33.044324 2692 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9cdbe399-d451-48b6-b761-0a0a74024888" containerName="cilium-agent" Jan 29 12:45:33.044502 kubelet[2692]: I0129 12:45:33.044347 2692 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b8f2e1-2854-4f82-ad46-821e44d64f8b" containerName="cilium-operator" Jan 29 12:45:33.044502 kubelet[2692]: I0129 12:45:33.044353 2692 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cdbe399-d451-48b6-b761-0a0a74024888" containerName="cilium-agent" Jan 29 12:45:33.055712 systemd[1]: Created slice kubepods-burstable-pod3d05946e_ae0c_495d_b518_4e97baeb2cb4.slice - libcontainer container kubepods-burstable-pod3d05946e_ae0c_495d_b518_4e97baeb2cb4.slice. 
Jan 29 12:45:33.140630 kubelet[2692]: I0129 12:45:33.140515 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d05946e-ae0c-495d-b518-4e97baeb2cb4-cilium-cgroup\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.140630 kubelet[2692]: I0129 12:45:33.140588 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d05946e-ae0c-495d-b518-4e97baeb2cb4-etc-cni-netd\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.140872 kubelet[2692]: I0129 12:45:33.140654 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d05946e-ae0c-495d-b518-4e97baeb2cb4-lib-modules\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.140872 kubelet[2692]: I0129 12:45:33.140726 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d05946e-ae0c-495d-b518-4e97baeb2cb4-xtables-lock\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.140872 kubelet[2692]: I0129 12:45:33.140754 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d05946e-ae0c-495d-b518-4e97baeb2cb4-cilium-config-path\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.140872 kubelet[2692]: I0129 12:45:33.140812 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d05946e-ae0c-495d-b518-4e97baeb2cb4-hostproc\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.140872 kubelet[2692]: I0129 12:45:33.140833 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d05946e-ae0c-495d-b518-4e97baeb2cb4-cni-path\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.141162 kubelet[2692]: I0129 12:45:33.140896 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d05946e-ae0c-495d-b518-4e97baeb2cb4-host-proc-sys-net\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.141162 kubelet[2692]: I0129 12:45:33.140917 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d05946e-ae0c-495d-b518-4e97baeb2cb4-hubble-tls\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.141162 kubelet[2692]: I0129 12:45:33.140980 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/3d05946e-ae0c-495d-b518-4e97baeb2cb4-cilium-ipsec-secrets\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.141162 kubelet[2692]: I0129 12:45:33.141003 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d05946e-ae0c-495d-b518-4e97baeb2cb4-host-proc-sys-kernel\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.141162 kubelet[2692]: I0129 12:45:33.141071 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d05946e-ae0c-495d-b518-4e97baeb2cb4-bpf-maps\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.141480 kubelet[2692]: I0129 12:45:33.141097 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfnml\" (UniqueName: \"kubernetes.io/projected/3d05946e-ae0c-495d-b518-4e97baeb2cb4-kube-api-access-pfnml\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.141480 kubelet[2692]: I0129 12:45:33.141153 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d05946e-ae0c-495d-b518-4e97baeb2cb4-cilium-run\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.141480 kubelet[2692]: I0129 12:45:33.141174 2692 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d05946e-ae0c-495d-b518-4e97baeb2cb4-clustermesh-secrets\") pod \"cilium-p65zw\" (UID: \"3d05946e-ae0c-495d-b518-4e97baeb2cb4\") " pod="kube-system/cilium-p65zw" Jan 29 12:45:33.166123 sshd[4428]: pam_unix(sshd:session): session closed for user core Jan 29 12:45:33.177025 systemd[1]: sshd@24-172.24.4.118:22-172.24.4.1:35660.service: Deactivated successfully. Jan 29 12:45:33.179959 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 12:45:33.181501 systemd-logind[1448]: Session 27 logged out. Waiting for processes to exit. Jan 29 12:45:33.188540 systemd[1]: Started sshd@25-172.24.4.118:22-172.24.4.1:35670.service - OpenSSH per-connection server daemon (172.24.4.1:35670). Jan 29 12:45:33.190380 systemd-logind[1448]: Removed session 27. Jan 29 12:45:33.363079 containerd[1470]: time="2025-01-29T12:45:33.362881572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p65zw,Uid:3d05946e-ae0c-495d-b518-4e97baeb2cb4,Namespace:kube-system,Attempt:0,}" Jan 29 12:45:33.416502 containerd[1470]: time="2025-01-29T12:45:33.415808287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:45:33.416502 containerd[1470]: time="2025-01-29T12:45:33.415938045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:45:33.416502 containerd[1470]: time="2025-01-29T12:45:33.415996797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:45:33.416502 containerd[1470]: time="2025-01-29T12:45:33.416200337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:45:33.472419 systemd[1]: Started cri-containerd-a324959df6a56af4897596b31636bd753b388b637c96bc4c430fcb0618abf90f.scope - libcontainer container a324959df6a56af4897596b31636bd753b388b637c96bc4c430fcb0618abf90f. Jan 29 12:45:33.495527 containerd[1470]: time="2025-01-29T12:45:33.495472869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p65zw,Uid:3d05946e-ae0c-495d-b518-4e97baeb2cb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a324959df6a56af4897596b31636bd753b388b637c96bc4c430fcb0618abf90f\"" Jan 29 12:45:33.500068 containerd[1470]: time="2025-01-29T12:45:33.499905873Z" level=info msg="CreateContainer within sandbox \"a324959df6a56af4897596b31636bd753b388b637c96bc4c430fcb0618abf90f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 12:45:33.515734 containerd[1470]: time="2025-01-29T12:45:33.515682124Z" level=info msg="CreateContainer within sandbox \"a324959df6a56af4897596b31636bd753b388b637c96bc4c430fcb0618abf90f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b0fe79872791c84a0005a8a10c0a336efb46eec2c2ea923c2c6253aaeecb8071\"" Jan 29 12:45:33.517131 containerd[1470]: time="2025-01-29T12:45:33.517105337Z" level=info msg="StartContainer for \"b0fe79872791c84a0005a8a10c0a336efb46eec2c2ea923c2c6253aaeecb8071\"" Jan 29 12:45:33.549420 systemd[1]: Started cri-containerd-b0fe79872791c84a0005a8a10c0a336efb46eec2c2ea923c2c6253aaeecb8071.scope - libcontainer container b0fe79872791c84a0005a8a10c0a336efb46eec2c2ea923c2c6253aaeecb8071. Jan 29 12:45:33.580366 containerd[1470]: time="2025-01-29T12:45:33.579734189Z" level=info msg="StartContainer for \"b0fe79872791c84a0005a8a10c0a336efb46eec2c2ea923c2c6253aaeecb8071\" returns successfully" Jan 29 12:45:33.586830 systemd[1]: cri-containerd-b0fe79872791c84a0005a8a10c0a336efb46eec2c2ea923c2c6253aaeecb8071.scope: Deactivated successfully. 
Jan 29 12:45:33.625614 containerd[1470]: time="2025-01-29T12:45:33.625465434Z" level=info msg="shim disconnected" id=b0fe79872791c84a0005a8a10c0a336efb46eec2c2ea923c2c6253aaeecb8071 namespace=k8s.io Jan 29 12:45:33.625614 containerd[1470]: time="2025-01-29T12:45:33.625515469Z" level=warning msg="cleaning up after shim disconnected" id=b0fe79872791c84a0005a8a10c0a336efb46eec2c2ea923c2c6253aaeecb8071 namespace=k8s.io Jan 29 12:45:33.625614 containerd[1470]: time="2025-01-29T12:45:33.625525720Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:45:34.018005 containerd[1470]: time="2025-01-29T12:45:34.017908480Z" level=info msg="CreateContainer within sandbox \"a324959df6a56af4897596b31636bd753b388b637c96bc4c430fcb0618abf90f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 12:45:34.079064 containerd[1470]: time="2025-01-29T12:45:34.078824350Z" level=info msg="CreateContainer within sandbox \"a324959df6a56af4897596b31636bd753b388b637c96bc4c430fcb0618abf90f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"85281ab516814325ea13d29a7d504c035686ab854c503b09eaa3302d0e3bc23d\"" Jan 29 12:45:34.080566 containerd[1470]: time="2025-01-29T12:45:34.080075174Z" level=info msg="StartContainer for \"85281ab516814325ea13d29a7d504c035686ab854c503b09eaa3302d0e3bc23d\"" Jan 29 12:45:34.125442 systemd[1]: Started cri-containerd-85281ab516814325ea13d29a7d504c035686ab854c503b09eaa3302d0e3bc23d.scope - libcontainer container 85281ab516814325ea13d29a7d504c035686ab854c503b09eaa3302d0e3bc23d. Jan 29 12:45:34.160251 containerd[1470]: time="2025-01-29T12:45:34.159735436Z" level=info msg="StartContainer for \"85281ab516814325ea13d29a7d504c035686ab854c503b09eaa3302d0e3bc23d\" returns successfully" Jan 29 12:45:34.162430 systemd[1]: cri-containerd-85281ab516814325ea13d29a7d504c035686ab854c503b09eaa3302d0e3bc23d.scope: Deactivated successfully. Jan 29 12:45:34.191673 containerd[1470]: time="2025-01-29T12:45:34.191551318Z" level=info msg="shim disconnected" id=85281ab516814325ea13d29a7d504c035686ab854c503b09eaa3302d0e3bc23d namespace=k8s.io Jan 29 12:45:34.191673 containerd[1470]: time="2025-01-29T12:45:34.191662341Z" level=warning msg="cleaning up after shim disconnected" id=85281ab516814325ea13d29a7d504c035686ab854c503b09eaa3302d0e3bc23d namespace=k8s.io Jan 29 12:45:34.191890 containerd[1470]: time="2025-01-29T12:45:34.191692789Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:45:34.626213 sshd[4440]: Accepted publickey for core from 172.24.4.1 port 35670 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:45:34.629192 sshd[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:45:34.639701 systemd-logind[1448]: New session 28 of user core. Jan 29 12:45:34.648553 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 29 12:45:35.029765 containerd[1470]: time="2025-01-29T12:45:35.029649726Z" level=info msg="CreateContainer within sandbox \"a324959df6a56af4897596b31636bd753b388b637c96bc4c430fcb0618abf90f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 12:45:35.084780 containerd[1470]: time="2025-01-29T12:45:35.084635638Z" level=info msg="CreateContainer within sandbox \"a324959df6a56af4897596b31636bd753b388b637c96bc4c430fcb0618abf90f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"99b85f00b88a1ac6c32090fb1dc101b428ae34348f3496dd34e4eaebe76cc77f\"" Jan 29 12:45:35.087056 containerd[1470]: time="2025-01-29T12:45:35.086950206Z" level=info msg="StartContainer for \"99b85f00b88a1ac6c32090fb1dc101b428ae34348f3496dd34e4eaebe76cc77f\"" Jan 29 12:45:35.139382 systemd[1]: Started cri-containerd-99b85f00b88a1ac6c32090fb1dc101b428ae34348f3496dd34e4eaebe76cc77f.scope - libcontainer container 99b85f00b88a1ac6c32090fb1dc101b428ae34348f3496dd34e4eaebe76cc77f. Jan 29 12:45:35.168278 systemd[1]: cri-containerd-99b85f00b88a1ac6c32090fb1dc101b428ae34348f3496dd34e4eaebe76cc77f.scope: Deactivated successfully. Jan 29 12:45:35.171046 containerd[1470]: time="2025-01-29T12:45:35.170453528Z" level=info msg="StartContainer for \"99b85f00b88a1ac6c32090fb1dc101b428ae34348f3496dd34e4eaebe76cc77f\" returns successfully" Jan 29 12:45:35.197756 containerd[1470]: time="2025-01-29T12:45:35.197661223Z" level=info msg="shim disconnected" id=99b85f00b88a1ac6c32090fb1dc101b428ae34348f3496dd34e4eaebe76cc77f namespace=k8s.io Jan 29 12:45:35.197756 containerd[1470]: time="2025-01-29T12:45:35.197723843Z" level=warning msg="cleaning up after shim disconnected" id=99b85f00b88a1ac6c32090fb1dc101b428ae34348f3496dd34e4eaebe76cc77f namespace=k8s.io Jan 29 12:45:35.197756 containerd[1470]: time="2025-01-29T12:45:35.197734984Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:45:35.237832 sshd[4440]: pam_unix(sshd:session): session closed for user core Jan 29 12:45:35.245734 systemd[1]: sshd@25-172.24.4.118:22-172.24.4.1:35670.service: Deactivated successfully. Jan 29 12:45:35.249764 systemd[1]: session-28.scope: Deactivated successfully. Jan 29 12:45:35.252826 systemd-logind[1448]: Session 28 logged out. Waiting for processes to exit. Jan 29 12:45:35.255991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99b85f00b88a1ac6c32090fb1dc101b428ae34348f3496dd34e4eaebe76cc77f-rootfs.mount: Deactivated successfully. Jan 29 12:45:35.269412 systemd[1]: Started sshd@26-172.24.4.118:22-172.24.4.1:60898.service - OpenSSH per-connection server daemon (172.24.4.1:60898). Jan 29 12:45:35.270754 systemd-logind[1448]: Removed session 28. Jan 29 12:45:36.037503 containerd[1470]: time="2025-01-29T12:45:36.036482439Z" level=info msg="CreateContainer within sandbox \"a324959df6a56af4897596b31636bd753b388b637c96bc4c430fcb0618abf90f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 12:45:36.071297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3858951599.mount: Deactivated successfully. 
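The mount-bpf-fs step created here is the Cilium init container that conventionally makes sure a BPF filesystem is mounted at /sys/fs/bpf, so the agent can pin maps that survive restarts. An illustrative, standalone sketch of that kind of mount call; this is not Cilium's actual code, it needs root, and the /sys/fs/bpf mount point is the conventional default rather than something taken from this log:

```go
// mount_bpffs.go - rough approximation of what a "mount-bpf-fs" init step does:
// ensure a bpf filesystem is mounted at /sys/fs/bpf (requires root).
package main

import (
	"errors"
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const target = "/sys/fs/bpf"

	if err := os.MkdirAll(target, 0o755); err != nil {
		log.Fatal(err)
	}
	// EBUSY typically means a bpf filesystem is already mounted there, which an
	// idempotent init container can treat as success.
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil && !errors.Is(err, unix.EBUSY) {
		log.Fatal(err)
	}
	log.Printf("bpf filesystem available at %s", target)
}
```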
Jan 29 12:45:36.092974 containerd[1470]: time="2025-01-29T12:45:36.091144750Z" level=info msg="CreateContainer within sandbox \"a324959df6a56af4897596b31636bd753b388b637c96bc4c430fcb0618abf90f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1ea34308c806ab599c465efbd040789aa355d7e19a235ad967f2f1a991e0b7ae\"" Jan 29 12:45:36.092974 containerd[1470]: time="2025-01-29T12:45:36.092207502Z" level=info msg="StartContainer for \"1ea34308c806ab599c465efbd040789aa355d7e19a235ad967f2f1a991e0b7ae\"" Jan 29 12:45:36.150389 systemd[1]: Started cri-containerd-1ea34308c806ab599c465efbd040789aa355d7e19a235ad967f2f1a991e0b7ae.scope - libcontainer container 1ea34308c806ab599c465efbd040789aa355d7e19a235ad967f2f1a991e0b7ae. Jan 29 12:45:36.175544 systemd[1]: cri-containerd-1ea34308c806ab599c465efbd040789aa355d7e19a235ad967f2f1a991e0b7ae.scope: Deactivated successfully. Jan 29 12:45:36.176346 containerd[1470]: time="2025-01-29T12:45:36.176212442Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d05946e_ae0c_495d_b518_4e97baeb2cb4.slice/cri-containerd-1ea34308c806ab599c465efbd040789aa355d7e19a235ad967f2f1a991e0b7ae.scope/memory.events\": no such file or directory" Jan 29 12:45:36.187512 containerd[1470]: time="2025-01-29T12:45:36.187403104Z" level=info msg="StartContainer for \"1ea34308c806ab599c465efbd040789aa355d7e19a235ad967f2f1a991e0b7ae\" returns successfully" Jan 29 12:45:36.220989 containerd[1470]: time="2025-01-29T12:45:36.220902661Z" level=info msg="shim disconnected" id=1ea34308c806ab599c465efbd040789aa355d7e19a235ad967f2f1a991e0b7ae namespace=k8s.io Jan 29 12:45:36.220989 containerd[1470]: time="2025-01-29T12:45:36.220979097Z" level=warning msg="cleaning up after shim disconnected" id=1ea34308c806ab599c465efbd040789aa355d7e19a235ad967f2f1a991e0b7ae namespace=k8s.io Jan 29 12:45:36.220989 containerd[1470]: time="2025-01-29T12:45:36.220990740Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:45:36.232481 containerd[1470]: time="2025-01-29T12:45:36.232430919Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:45:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:45:36.254550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ea34308c806ab599c465efbd040789aa355d7e19a235ad967f2f1a991e0b7ae-rootfs.mount: Deactivated successfully. Jan 29 12:45:36.626111 sshd[4676]: Accepted publickey for core from 172.24.4.1 port 60898 ssh2: RSA SHA256:zxngcdanlyR0EKDkzlMhbKGtCUFY5H5rVeTzxavBToM Jan 29 12:45:36.629798 sshd[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:45:36.641017 systemd-logind[1448]: New session 29 of user core. Jan 29 12:45:36.654607 systemd[1]: Started session-29.scope - Session 29 of User core. 
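The EventChan warning above is benign: clean-cilium-state exits almost immediately, so its cgroup (and the memory.events file containerd wants to add an inotify watch on) is already gone by the time the watch is set up. For a cgroup that still exists, memory.events is just a small key/value list; a sketch of reading it, where the path below is a placeholder rather than the scope path printed in the warning:

```go
// read_memory_events.go - sketch: parse a cgroup v2 memory.events file like the
// one containerd's EventChan tried to watch. The path is a hypothetical example.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	path := "/sys/fs/cgroup/system.slice/example.service/memory.events" // placeholder
	f, err := os.Open(path)
	if err != nil {
		// A missing file usually just means the cgroup has already been removed.
		log.Fatal(err)
	}
	defer f.Close()

	counters := map[string]uint64{}
	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text()) // lines look like "oom_kill 0"
		if len(fields) != 2 {
			continue
		}
		if n, err := strconv.ParseUint(fields[1], 10, 64); err == nil {
			counters[fields[0]] = n
		}
	}
	fmt.Println(counters)
}
```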
Jan 29 12:45:37.042323 containerd[1470]: time="2025-01-29T12:45:37.041881404Z" level=info msg="CreateContainer within sandbox \"a324959df6a56af4897596b31636bd753b388b637c96bc4c430fcb0618abf90f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 12:45:37.089132 containerd[1470]: time="2025-01-29T12:45:37.088268442Z" level=info msg="CreateContainer within sandbox \"a324959df6a56af4897596b31636bd753b388b637c96bc4c430fcb0618abf90f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6aa205a1f5084e77b169c638ee72bd0bfe131458cac746778e732d9d05080440\"" Jan 29 12:45:37.089132 containerd[1470]: time="2025-01-29T12:45:37.088755764Z" level=info msg="StartContainer for \"6aa205a1f5084e77b169c638ee72bd0bfe131458cac746778e732d9d05080440\"" Jan 29 12:45:37.089205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1173337642.mount: Deactivated successfully. Jan 29 12:45:37.142462 systemd[1]: Started cri-containerd-6aa205a1f5084e77b169c638ee72bd0bfe131458cac746778e732d9d05080440.scope - libcontainer container 6aa205a1f5084e77b169c638ee72bd0bfe131458cac746778e732d9d05080440. Jan 29 12:45:37.227202 containerd[1470]: time="2025-01-29T12:45:37.227127015Z" level=info msg="StartContainer for \"6aa205a1f5084e77b169c638ee72bd0bfe131458cac746778e732d9d05080440\" returns successfully" Jan 29 12:45:37.646378 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 12:45:37.696328 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Jan 29 12:45:41.002985 systemd-networkd[1360]: lxc_health: Link UP Jan 29 12:45:41.019406 systemd-networkd[1360]: lxc_health: Gained carrier Jan 29 12:45:41.388046 kubelet[2692]: I0129 12:45:41.387921 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p65zw" podStartSLOduration=8.387903400999999 podStartE2EDuration="8.387903401s" podCreationTimestamp="2025-01-29 12:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:45:38.086704922 +0000 UTC m=+161.160139222" watchObservedRunningTime="2025-01-29 12:45:41.387903401 +0000 UTC m=+164.461337671" Jan 29 12:45:42.461424 systemd-networkd[1360]: lxc_health: Gained IPv6LL Jan 29 12:45:48.139987 systemd[1]: run-containerd-runc-k8s.io-6aa205a1f5084e77b169c638ee72bd0bfe131458cac746778e732d9d05080440-runc.7KvFjk.mount: Deactivated successfully. Jan 29 12:45:48.438004 sshd[4676]: pam_unix(sshd:session): session closed for user core Jan 29 12:45:48.446423 systemd[1]: sshd@26-172.24.4.118:22-172.24.4.1:60898.service: Deactivated successfully. Jan 29 12:45:48.450484 systemd[1]: session-29.scope: Deactivated successfully. Jan 29 12:45:48.451993 systemd-logind[1448]: Session 29 logged out. Waiting for processes to exit. Jan 29 12:45:48.454228 systemd-logind[1448]: Removed session 29. 
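With cilium-agent running, kubelet records podStartSLOduration=8.387903401s for cilium-p65zw; since the image was already present (note the zeroed pull timestamps), that figure matches the gap between the pod creation timestamp and the watch-observed running time: 12:45:41.387903401 − 12:45:33 = 8.387903401 s. A small check of that arithmetic, using the timestamps copied from the kubelet entry:

```go
// slo_duration.go - reproduce the startup-duration arithmetic from the kubelet
// entry above: watch-observed running time minus pod creation timestamp.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-01-29 12:45:33 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	observed, err := time.Parse(layout, "2025-01-29 12:45:41.387903401 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	// Prints 8.387903401s, matching podStartSLOduration in the log.
	fmt.Println(observed.Sub(created))
}
```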
Jan 29 12:45:57.455128 containerd[1470]: time="2025-01-29T12:45:57.455037677Z" level=info msg="StopPodSandbox for \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\"" Jan 29 12:45:57.455128 containerd[1470]: time="2025-01-29T12:45:57.455274598Z" level=info msg="TearDown network for sandbox \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\" successfully" Jan 29 12:45:57.457524 containerd[1470]: time="2025-01-29T12:45:57.455311569Z" level=info msg="StopPodSandbox for \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\" returns successfully" Jan 29 12:45:57.458626 containerd[1470]: time="2025-01-29T12:45:57.457813116Z" level=info msg="RemovePodSandbox for \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\"" Jan 29 12:45:57.458626 containerd[1470]: time="2025-01-29T12:45:57.457875094Z" level=info msg="Forcibly stopping sandbox \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\"" Jan 29 12:45:57.458626 containerd[1470]: time="2025-01-29T12:45:57.457989101Z" level=info msg="TearDown network for sandbox \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\" successfully" Jan 29 12:45:57.464970 containerd[1470]: time="2025-01-29T12:45:57.464839645Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:45:57.464970 containerd[1470]: time="2025-01-29T12:45:57.464957410Z" level=info msg="RemovePodSandbox \"dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3\" returns successfully" Jan 29 12:45:57.465931 containerd[1470]: time="2025-01-29T12:45:57.465872784Z" level=info msg="StopPodSandbox for \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\"" Jan 29 12:45:57.466412 containerd[1470]: time="2025-01-29T12:45:57.466027208Z" level=info msg="TearDown network for sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" successfully" Jan 29 12:45:57.466412 containerd[1470]: time="2025-01-29T12:45:57.466055062Z" level=info msg="StopPodSandbox for \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" returns successfully" Jan 29 12:45:57.466859 containerd[1470]: time="2025-01-29T12:45:57.466785974Z" level=info msg="RemovePodSandbox for \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\"" Jan 29 12:45:57.466859 containerd[1470]: time="2025-01-29T12:45:57.466848955Z" level=info msg="Forcibly stopping sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\"" Jan 29 12:45:57.467017 containerd[1470]: time="2025-01-29T12:45:57.466954646Z" level=info msg="TearDown network for sandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" successfully" Jan 29 12:45:57.473038 containerd[1470]: time="2025-01-29T12:45:57.472942656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
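The StopPodSandbox / TearDown / RemovePodSandbox sequence is kubelet's periodic sandbox garbage collection going through the CRI; the "not found" warning only means the sandbox metadata is already gone when containerd tries to attach a status to the deletion event. A sketch of the same two CRI calls against containerd's socket, where the sandbox ID is taken from the log but the socket path and timeout are assumptions:

```go
// cri_remove_sandbox.go - sketch of the StopPodSandbox/RemovePodSandbox CRI
// calls that produce the entries above (run against containerd's CRI endpoint).
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	const sandboxID = "dc3627a7ebcea30120ef9babb3bd92a393f1dc97d38c067230bea28e18b6ebb3"

	// Stop tears down the sandbox's network ("TearDown network ..." above).
	if _, err := client.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatal(err)
	}
	// Remove deletes the sandbox metadata ("RemovePodSandbox ... returns successfully").
	if _, err := client.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatal(err)
	}
}
```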
Jan 29 12:45:57.473175 containerd[1470]: time="2025-01-29T12:45:57.473041204Z" level=info msg="RemovePodSandbox \"f00528e240de341071cbacbc354107d34a85062e050937c5f0c59e6d2fe08870\" returns successfully" Jan 29 12:46:02.486221 update_engine[1450]: I20250129 12:46:02.486109 1450 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 12:46:02.486221 update_engine[1450]: I20250129 12:46:02.486194 1450 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 29 12:46:02.487023 update_engine[1450]: I20250129 12:46:02.486557 1450 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 29 12:46:02.487488 update_engine[1450]: I20250129 12:46:02.487411 1450 omaha_request_params.cc:62] Current group set to lts Jan 29 12:46:02.489968 update_engine[1450]: I20250129 12:46:02.488343 1450 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 29 12:46:02.489968 update_engine[1450]: I20250129 12:46:02.488380 1450 update_attempter.cc:643] Scheduling an action processor start. Jan 29 12:46:02.489968 update_engine[1450]: I20250129 12:46:02.488461 1450 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 12:46:02.489968 update_engine[1450]: I20250129 12:46:02.488534 1450 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 12:46:02.490354 locksmithd[1480]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 12:46:02.492087 update_engine[1450]: I20250129 12:46:02.491082 1450 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 12:46:02.492087 update_engine[1450]: I20250129 12:46:02.491124 1450 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Jan 29 12:46:02.492087 update_engine[1450]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Jan 29 12:46:02.492087 update_engine[1450]: <os version="Chateau" platform="CoreOS" sp="4081.3.0_x86_64"></os> Jan 29 12:46:02.492087 update_engine[1450]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4081.3.0" track="lts" bootid="{6e3ea135-9367-41a5-aa5e-77421f701937}" oem="openstack" oemversion="0" alephversion="4081.3.0" machineid="915d3bc888794423b76aa0cff75d46ac" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" > Jan 29 12:46:02.492087 update_engine[1450]: <ping active="1"></ping> Jan 29 12:46:02.492087 update_engine[1450]: <updatecheck></updatecheck> Jan 29 12:46:02.492087 update_engine[1450]: <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event> Jan 29 12:46:02.492087 update_engine[1450]: </app> Jan 29 12:46:02.492087 update_engine[1450]: </request> Jan 29 12:46:02.492087 update_engine[1450]: I20250129 12:46:02.491139 1450 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:46:02.494960 update_engine[1450]: I20250129 12:46:02.494897 1450 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:46:02.495562 update_engine[1450]: I20250129 12:46:02.495496 1450 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
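The request body update_engine logs above is a standard Omaha v3 check-in. A sketch that rebuilds an equivalent document with encoding/xml; the attribute values are copied from the log, while the struct layout and the omission of some attributes (bootid, machineid, the event element, and so on) are my own simplification:

```go
// omaha_request.go - sketch: build an Omaha v3 update-check request similar to
// the one update_engine posts above. Values are copied from the logged request.
package main

import (
	"encoding/xml"
	"fmt"
	"log"
)

type osTag struct {
	Version  string `xml:"version,attr"`
	Platform string `xml:"platform,attr"`
	SP       string `xml:"sp,attr"`
}

type pingTag struct {
	Active string `xml:"active,attr"`
}

type appTag struct {
	AppID       string    `xml:"appid,attr"`
	Version     string    `xml:"version,attr"`
	Track       string    `xml:"track,attr"`
	Board       string    `xml:"board,attr"`
	Ping        *pingTag  `xml:"ping"`
	UpdateCheck *struct{} `xml:"updatecheck"`
}

type omahaRequest struct {
	XMLName  xml.Name `xml:"request"`
	Protocol string   `xml:"protocol,attr"`
	Version  string   `xml:"version,attr"`
	OS       osTag    `xml:"os"`
	App      appTag   `xml:"app"`
}

func main() {
	req := omahaRequest{
		Protocol: "3.0",
		Version:  "update_engine-0.4.10",
		OS:       osTag{Version: "Chateau", Platform: "CoreOS", SP: "4081.3.0_x86_64"},
		App: appTag{
			AppID:       "{e96281a6-d1af-4bde-9a0a-97b76e56dc57}",
			Version:     "4081.3.0",
			Track:       "lts",
			Board:       "amd64-usr",
			Ping:        &pingTag{Active: "1"},
			UpdateCheck: &struct{}{},
		},
	}
	out, err := xml.MarshalIndent(req, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(xml.Header + string(out))
}
```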
Jan 29 12:46:02.512406 update_engine[1450]: E20250129 12:46:02.512220 1450 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:46:02.512658 update_engine[1450]: I20250129 12:46:02.512470 1450 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 29 12:46:12.495596 update_engine[1450]: I20250129 12:46:12.495424 1450 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:46:12.496379 update_engine[1450]: I20250129 12:46:12.495841 1450 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:46:12.496882 update_engine[1450]: I20250129 12:46:12.496802 1450 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:46:12.507394 update_engine[1450]: E20250129 12:46:12.507167 1450 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:46:12.507394 update_engine[1450]: I20250129 12:46:12.507333 1450 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
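The "Could not resolve host: disabled" errors, repeated roughly ten seconds apart, suggest the update server is set to the literal string "disabled" (a common way to switch off update checks on Flatcar), so every fetch fails at DNS resolution and the fetcher simply re-arms its timer and retries. A rough sketch of that bounded retry pattern; the URL, retry limit, and interval are placeholders, not update_engine's actual values:

```go
// retry_fetch.go - rough sketch of a bounded retry loop like the one visible
// above: each attempt fails to resolve the placeholder host and retries later.
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	const maxRetries = 3              // placeholder, not update_engine's limit
	const interval = 10 * time.Second // matches the ~10 s spacing in the log
	url := "http://disabled/update"   // placeholder mirroring a disabled server

	for attempt := 1; attempt <= maxRetries; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			log.Printf("got HTTP %d", resp.StatusCode)
			return
		}
		log.Printf("No HTTP response, retry %d: %v", attempt, err)
		time.Sleep(interval)
	}
	log.Print("giving up until the next scheduled check")
}
```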