Dec 13 02:39:03.030771 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 02:39:03.030811 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:39:03.030823 kernel: BIOS-provided physical RAM map:
Dec 13 02:39:03.030831 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 02:39:03.030838 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 02:39:03.030845 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 02:39:03.030854 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 13 02:39:03.030862 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 13 02:39:03.030869 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 02:39:03.030878 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 02:39:03.030886 kernel: NX (Execute Disable) protection: active
Dec 13 02:39:03.030893 kernel: APIC: Static calls initialized
Dec 13 02:39:03.030901 kernel: SMBIOS 2.8 present.
Dec 13 02:39:03.030909 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 02:39:03.030918 kernel: Hypervisor detected: KVM
Dec 13 02:39:03.030928 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:39:03.030936 kernel: kvm-clock: using sched offset of 4789949336 cycles
Dec 13 02:39:03.030945 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:39:03.030953 kernel: tsc: Detected 1996.249 MHz processor
Dec 13 02:39:03.030961 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:39:03.030970 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:39:03.030978 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 13 02:39:03.030986 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 02:39:03.030994 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:39:03.031004 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:39:03.031012 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Dec 13 02:39:03.031020 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:39:03.031029 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:39:03.031037 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:39:03.031045 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 02:39:03.031053 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:39:03.031061 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 02:39:03.031069 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Dec 13 02:39:03.031079 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Dec 13 02:39:03.031087 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 02:39:03.031095 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Dec 13 02:39:03.031103 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Dec 13 02:39:03.031111 kernel: No NUMA configuration found
Dec 13 02:39:03.031119 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Dec 13 02:39:03.031127 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Dec 13 02:39:03.031138 kernel: Zone ranges:
Dec 13 02:39:03.031149 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:39:03.031157 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Dec 13 02:39:03.031165 kernel: Normal empty
Dec 13 02:39:03.031174 kernel: Movable zone start for each node
Dec 13 02:39:03.031182 kernel: Early memory node ranges
Dec 13 02:39:03.031190 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 02:39:03.031201 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 13 02:39:03.031209 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Dec 13 02:39:03.031218 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:39:03.031226 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 02:39:03.031234 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Dec 13 02:39:03.031243 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 02:39:03.031251 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:39:03.031273 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 02:39:03.031281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 02:39:03.031292 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:39:03.031301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:39:03.031309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:39:03.031317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:39:03.031326 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:39:03.031334 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:39:03.031939 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 02:39:03.031989 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 02:39:03.031997 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:39:03.032006 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:39:03.032019 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:39:03.032028 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 02:39:03.032037 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 02:39:03.032045 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:39:03.032053 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 02:39:03.032072 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:39:03.032081 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:39:03.032096 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:39:03.032105 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:39:03.032113 kernel: Fallback order for Node 0: 0
Dec 13 02:39:03.032122 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Dec 13 02:39:03.032130 kernel: Policy zone: DMA32
Dec 13 02:39:03.032139 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:39:03.032148 kernel: Memory: 1971212K/2096620K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 02:39:03.032156 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:39:03.032165 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 02:39:03.032175 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 02:39:03.032184 kernel: Dynamic Preempt: voluntary
Dec 13 02:39:03.032192 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 02:39:03.032210 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:39:03.032219 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:39:03.032228 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 02:39:03.032237 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:39:03.032246 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:39:03.032254 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:39:03.032263 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:39:03.032273 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:39:03.032282 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 02:39:03.032291 kernel: Console: colour VGA+ 80x25
Dec 13 02:39:03.032299 kernel: printk: console [tty0] enabled
Dec 13 02:39:03.032308 kernel: printk: console [ttyS0] enabled
Dec 13 02:39:03.032316 kernel: ACPI: Core revision 20230628
Dec 13 02:39:03.032325 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:39:03.032334 kernel: x2apic enabled
Dec 13 02:39:03.032342 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 02:39:03.032353 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 02:39:03.032361 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 02:39:03.032370 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Dec 13 02:39:03.032379 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 02:39:03.032387 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 02:39:03.032396 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:39:03.032405 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:39:03.032413 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:39:03.032422 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:39:03.032432 kernel: Speculative Store Bypass: Vulnerable
Dec 13 02:39:03.032441 kernel: x86/fpu: x87 FPU will use FXSAVE
Dec 13 02:39:03.032449 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:39:03.032457 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:39:03.032466 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 02:39:03.032474 kernel: landlock: Up and running.
Dec 13 02:39:03.032482 kernel: SELinux: Initializing.
Dec 13 02:39:03.032491 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:39:03.032506 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:39:03.032515 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Dec 13 02:39:03.032525 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:39:03.032535 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:39:03.032544 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 02:39:03.032553 kernel: Performance Events: AMD PMU driver.
Dec 13 02:39:03.032562 kernel: ... version: 0
Dec 13 02:39:03.032571 kernel: ... bit width: 48
Dec 13 02:39:03.032581 kernel: ... generic registers: 4
Dec 13 02:39:03.032590 kernel: ... value mask: 0000ffffffffffff
Dec 13 02:39:03.032599 kernel: ... max period: 00007fffffffffff
Dec 13 02:39:03.032620 kernel: ... fixed-purpose events: 0
Dec 13 02:39:03.032629 kernel: ... event mask: 000000000000000f
Dec 13 02:39:03.032638 kernel: signal: max sigframe size: 1440
Dec 13 02:39:03.032648 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:39:03.032657 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 02:39:03.032666 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:39:03.032677 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 02:39:03.032686 kernel: .... node #0, CPUs: #1
Dec 13 02:39:03.032695 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:39:03.032704 kernel: smpboot: Max logical packages: 2
Dec 13 02:39:03.032714 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Dec 13 02:39:03.032724 kernel: devtmpfs: initialized
Dec 13 02:39:03.032734 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:39:03.032744 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:39:03.032754 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:39:03.032763 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:39:03.032775 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:39:03.032784 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:39:03.032794 kernel: audit: type=2000 audit(1734057541.613:1): state=initialized audit_enabled=0 res=1
Dec 13 02:39:03.032803 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:39:03.032813 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:39:03.032822 kernel: cpuidle: using governor menu
Dec 13 02:39:03.032832 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:39:03.032841 kernel: dca service started, version 1.12.1
Dec 13 02:39:03.032851 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:39:03.032863 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:39:03.032872 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:39:03.032882 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 02:39:03.032891 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:39:03.032901 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:39:03.032910 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:39:03.032920 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:39:03.032929 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 02:39:03.032939 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 02:39:03.032950 kernel: ACPI: Interpreter enabled
Dec 13 02:39:03.032959 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 02:39:03.032968 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:39:03.032978 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:39:03.032988 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 02:39:03.032997 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 02:39:03.033007 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:39:03.034676 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:39:03.034797 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 02:39:03.034892 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 02:39:03.034906 kernel: acpiphp: Slot [3] registered
Dec 13 02:39:03.034915 kernel: acpiphp: Slot [4] registered
Dec 13 02:39:03.034924 kernel: acpiphp: Slot [5] registered
Dec 13 02:39:03.034933 kernel: acpiphp: Slot [6] registered
Dec 13 02:39:03.034942 kernel: acpiphp: Slot [7] registered
Dec 13 02:39:03.034950 kernel: acpiphp: Slot [8] registered
Dec 13 02:39:03.034963 kernel: acpiphp: Slot [9] registered
Dec 13 02:39:03.034971 kernel: acpiphp: Slot [10] registered
Dec 13 02:39:03.034980 kernel: acpiphp: Slot [11] registered
Dec 13 02:39:03.034989 kernel: acpiphp: Slot [12] registered
Dec 13 02:39:03.034998 kernel: acpiphp: Slot [13] registered
Dec 13 02:39:03.035006 kernel: acpiphp: Slot [14] registered
Dec 13 02:39:03.035015 kernel: acpiphp: Slot [15] registered
Dec 13 02:39:03.035024 kernel: acpiphp: Slot [16] registered
Dec 13 02:39:03.035032 kernel: acpiphp: Slot [17] registered
Dec 13 02:39:03.035043 kernel: acpiphp: Slot [18] registered
Dec 13 02:39:03.035052 kernel: acpiphp: Slot [19] registered
Dec 13 02:39:03.035060 kernel: acpiphp: Slot [20] registered
Dec 13 02:39:03.035069 kernel: acpiphp: Slot [21] registered
Dec 13 02:39:03.035077 kernel: acpiphp: Slot [22] registered
Dec 13 02:39:03.035086 kernel: acpiphp: Slot [23] registered
Dec 13 02:39:03.035095 kernel: acpiphp: Slot [24] registered
Dec 13 02:39:03.035103 kernel: acpiphp: Slot [25] registered
Dec 13 02:39:03.035112 kernel: acpiphp: Slot [26] registered
Dec 13 02:39:03.035121 kernel: acpiphp: Slot [27] registered
Dec 13 02:39:03.035131 kernel: acpiphp: Slot [28] registered
Dec 13 02:39:03.035140 kernel: acpiphp: Slot [29] registered
Dec 13 02:39:03.035149 kernel: acpiphp: Slot [30] registered
Dec 13 02:39:03.035157 kernel: acpiphp: Slot [31] registered
Dec 13 02:39:03.035166 kernel: PCI host bridge to bus 0000:00
Dec 13 02:39:03.035264 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:39:03.035369 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:39:03.035465 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:39:03.035555 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 02:39:03.037687 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 02:39:03.037777 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:39:03.037894 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 02:39:03.038053 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 02:39:03.038167 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 02:39:03.038265 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Dec 13 02:39:03.038356 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 02:39:03.038446 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 02:39:03.038535 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 02:39:03.038696 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 02:39:03.038798 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 02:39:03.038920 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 02:39:03.039759 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 02:39:03.039868 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 02:39:03.039962 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 02:39:03.040055 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 02:39:03.040148 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 13 02:39:03.040241 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 02:39:03.040335 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:39:03.040443 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 02:39:03.040539 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 13 02:39:03.041697 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 13 02:39:03.041794 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 02:39:03.041883 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 02:39:03.041984 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 02:39:03.042083 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 02:39:03.042173 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 13 02:39:03.042261 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 02:39:03.042357 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 02:39:03.042447 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 13 02:39:03.042535 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 02:39:03.043776 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 02:39:03.043888 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Dec 13 02:39:03.043986 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 02:39:03.044001 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:39:03.044011 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:39:03.044021 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:39:03.044031 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:39:03.044041 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 02:39:03.044051 kernel: iommu: Default domain type: Translated
Dec 13 02:39:03.044061 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:39:03.044075 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:39:03.044085 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:39:03.044095 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 02:39:03.044105 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 13 02:39:03.044201 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 02:39:03.044295 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 02:39:03.044384 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:39:03.044397 kernel: vgaarb: loaded
Dec 13 02:39:03.044410 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:39:03.044419 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:39:03.044428 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:39:03.044437 kernel: pnp: PnP ACPI init
Dec 13 02:39:03.044526 kernel: pnp 00:03: [dma 2]
Dec 13 02:39:03.044540 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 02:39:03.044549 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:39:03.044558 kernel: NET: Registered PF_INET protocol family
Dec 13 02:39:03.044567 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:39:03.044580 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 02:39:03.044589 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:39:03.044598 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:39:03.044623 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 02:39:03.044632 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 02:39:03.044641 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:39:03.044650 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:39:03.044659 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:39:03.044668 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:39:03.044755 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:39:03.044834 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:39:03.044913 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:39:03.044990 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 02:39:03.045068 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 02:39:03.045160 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 02:39:03.045254 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 02:39:03.045271 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:39:03.045280 kernel: Initialise system trusted keyrings
Dec 13 02:39:03.045289 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 02:39:03.045298 kernel: Key type asymmetric registered
Dec 13 02:39:03.045307 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:39:03.045328 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 02:39:03.045337 kernel: io scheduler mq-deadline registered
Dec 13 02:39:03.045346 kernel: io scheduler kyber registered
Dec 13 02:39:03.045355 kernel: io scheduler bfq registered
Dec 13 02:39:03.045366 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:39:03.045376 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 02:39:03.045385 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 02:39:03.045394 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 02:39:03.045403 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 02:39:03.045412 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:39:03.045421 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:39:03.045430 kernel: random: crng init done
Dec 13 02:39:03.045439 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:39:03.045449 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:39:03.045458 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:39:03.045560 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 02:39:03.045575 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:39:03.046464 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 02:39:03.046565 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T02:39:02 UTC (1734057542)
Dec 13 02:39:03.046670 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 02:39:03.046685 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 02:39:03.046698 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:39:03.046707 kernel: Segment Routing with IPv6
Dec 13 02:39:03.046716 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:39:03.046725 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:39:03.046733 kernel: Key type dns_resolver registered
Dec 13 02:39:03.046742 kernel: IPI shorthand broadcast: enabled
Dec 13 02:39:03.046751 kernel: sched_clock: Marking stable (948011546, 128952128)->(1080107852, -3144178)
Dec 13 02:39:03.046760 kernel: registered taskstats version 1
Dec 13 02:39:03.046769 kernel: Loading compiled-in X.509 certificates
Dec 13 02:39:03.046780 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 02:39:03.046789 kernel: Key type .fscrypt registered
Dec 13 02:39:03.046798 kernel: Key type fscrypt-provisioning registered
Dec 13 02:39:03.046807 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:39:03.046816 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:39:03.046825 kernel: ima: No architecture policies found
Dec 13 02:39:03.046833 kernel: clk: Disabling unused clocks
Dec 13 02:39:03.046842 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 02:39:03.046851 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 02:39:03.046862 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 02:39:03.046871 kernel: Run /init as init process
Dec 13 02:39:03.046880 kernel: with arguments:
Dec 13 02:39:03.046889 kernel: /init
Dec 13 02:39:03.046897 kernel: with environment:
Dec 13 02:39:03.046906 kernel: HOME=/
Dec 13 02:39:03.046915 kernel: TERM=linux
Dec 13 02:39:03.046923 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:39:03.046941 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 02:39:03.046955 systemd[1]: Detected virtualization kvm.
Dec 13 02:39:03.046965 systemd[1]: Detected architecture x86-64.
Dec 13 02:39:03.046974 systemd[1]: Running in initrd.
Dec 13 02:39:03.046984 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:39:03.046993 systemd[1]: Hostname set to .
Dec 13 02:39:03.047003 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:39:03.047013 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:39:03.047025 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 02:39:03.047043 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 02:39:03.047054 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 02:39:03.047064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 02:39:03.047074 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 02:39:03.047084 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 02:39:03.047095 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 02:39:03.047107 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 02:39:03.047117 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 02:39:03.047127 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 02:39:03.047137 systemd[1]: Reached target paths.target - Path Units.
Dec 13 02:39:03.047156 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 02:39:03.047168 systemd[1]: Reached target swap.target - Swaps.
Dec 13 02:39:03.047180 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 02:39:03.047189 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 02:39:03.047199 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 02:39:03.047210 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 02:39:03.047220 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 02:39:03.047230 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 02:39:03.047240 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 02:39:03.047250 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 02:39:03.047261 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 02:39:03.047271 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 02:39:03.047281 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 02:39:03.047291 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 02:39:03.047301 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:39:03.047311 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 02:39:03.047321 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 02:39:03.047331 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:39:03.047341 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 02:39:03.047353 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 02:39:03.047401 systemd-journald[185]: Collecting audit messages is disabled.
Dec 13 02:39:03.047425 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:39:03.047439 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 02:39:03.047449 systemd-journald[185]: Journal started
Dec 13 02:39:03.047475 systemd-journald[185]: Runtime Journal (/run/log/journal/128a78b7cba94b1fadca9b7b03f96146) is 4.9M, max 39.3M, 34.4M free.
Dec 13 02:39:03.036814 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 02:39:03.090654 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:39:03.090685 kernel: Bridge firewalling registered
Dec 13 02:39:03.090698 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 02:39:03.070904 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 02:39:03.092104 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 02:39:03.094109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:39:03.102820 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:39:03.105791 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 02:39:03.107733 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 02:39:03.110182 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 02:39:03.120845 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 02:39:03.122325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:39:03.131695 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:39:03.132469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 02:39:03.133744 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 02:39:03.144722 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 02:39:03.148728 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 02:39:03.156289 dracut-cmdline[217]: dracut-dracut-053
Dec 13 02:39:03.159071 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 02:39:03.186864 systemd-resolved[220]: Positive Trust Anchors:
Dec 13 02:39:03.186895 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:39:03.186935 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 02:39:03.190183 systemd-resolved[220]: Defaulting to hostname 'linux'.
Dec 13 02:39:03.191361 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 02:39:03.192941 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 02:39:03.236634 kernel: SCSI subsystem initialized
Dec 13 02:39:03.246659 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:39:03.258731 kernel: iscsi: registered transport (tcp)
Dec 13 02:39:03.281033 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:39:03.281108 kernel: QLogic iSCSI HBA Driver
Dec 13 02:39:03.335343 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 02:39:03.342837 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 02:39:03.407156 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:39:03.407268 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:39:03.409984 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 02:39:03.474716 kernel: raid6: sse2x4 gen() 5210 MB/s
Dec 13 02:39:03.491655 kernel: raid6: sse2x2 gen() 6694 MB/s
Dec 13 02:39:03.508799 kernel: raid6: sse2x1 gen() 10047 MB/s
Dec 13 02:39:03.508892 kernel: raid6: using algorithm sse2x1 gen() 10047 MB/s
Dec 13 02:39:03.526933 kernel: raid6: .... xor() 7015 MB/s, rmw enabled
Dec 13 02:39:03.527009 kernel: raid6: using ssse3x2 recovery algorithm
Dec 13 02:39:03.550219 kernel: xor: measuring software checksum speed
Dec 13 02:39:03.550285 kernel: prefetch64-sse : 17066 MB/sec
Dec 13 02:39:03.550717 kernel: generic_sse : 15486 MB/sec
Dec 13 02:39:03.552357 kernel: xor: using function: prefetch64-sse (17066 MB/sec)
Dec 13 02:39:03.741716 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 02:39:03.759123 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 02:39:03.765048 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 02:39:03.780595 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Dec 13 02:39:03.785157 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 02:39:03.796958 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 02:39:03.816922 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Dec 13 02:39:03.863775 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 02:39:03.874899 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 02:39:03.918224 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 02:39:03.931459 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 02:39:03.978518 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 02:39:03.979598 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 02:39:03.981839 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 02:39:03.983489 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 02:39:03.991855 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 02:39:04.006731 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Dec 13 02:39:04.164997 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Dec 13 02:39:04.165281 kernel: libata version 3.00 loaded.
Dec 13 02:39:04.165340 kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 02:39:04.165688 kernel: scsi host0: ata_piix
Dec 13 02:39:04.165982 kernel: scsi host1: ata_piix
Dec 13 02:39:04.166254 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Dec 13 02:39:04.166290 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Dec 13 02:39:04.166319 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 02:39:04.166348 kernel: GPT:17805311 != 41943039
Dec 13 02:39:04.166375 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 02:39:04.166403 kernel: GPT:17805311 != 41943039
Dec 13 02:39:04.166429 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:39:04.166456 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:39:04.014179 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 02:39:04.037084 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:39:04.037227 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:39:04.037921 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:39:04.038402 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:39:04.038520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:39:04.039147 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:39:04.051927 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 02:39:04.106416 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 02:39:04.112906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 02:39:04.134696 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 02:39:04.255187 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454)
Dec 13 02:39:04.266665 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (459)
Dec 13 02:39:04.304999 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 02:39:04.311457 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 02:39:04.315980 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 02:39:04.316586 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 02:39:04.322907 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 02:39:04.327729 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 02:39:04.341169 disk-uuid[508]: Primary Header is updated.
Dec 13 02:39:04.341169 disk-uuid[508]: Secondary Entries is updated.
Dec 13 02:39:04.341169 disk-uuid[508]: Secondary Header is updated.
Dec 13 02:39:04.350634 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:39:04.355634 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:39:05.368764 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 02:39:05.369784 disk-uuid[509]: The operation has completed successfully.
Dec 13 02:39:05.451039 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:39:05.451264 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 02:39:05.471746 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 02:39:05.490113 sh[522]: Success
Dec 13 02:39:05.510638 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Dec 13 02:39:05.585165 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 02:39:05.586552 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 02:39:05.588289 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 02:39:05.611027 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 02:39:05.611103 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:39:05.613647 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 02:39:05.613691 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 02:39:05.614869 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 02:39:05.632021 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 02:39:05.634384 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 02:39:05.640884 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 02:39:05.651900 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 02:39:05.674289 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:39:05.674387 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:39:05.677650 kernel: BTRFS info (device vda6): using free space tree
Dec 13 02:39:05.684861 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 02:39:05.705012 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 02:39:05.709781 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:39:05.723667 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 02:39:05.732951 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 02:39:05.766244 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 02:39:05.773356 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 02:39:05.796156 systemd-networkd[704]: lo: Link UP
Dec 13 02:39:05.796167 systemd-networkd[704]: lo: Gained carrier
Dec 13 02:39:05.797424 systemd-networkd[704]: Enumeration completed
Dec 13 02:39:05.797503 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 02:39:05.798336 systemd-networkd[704]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:39:05.798340 systemd-networkd[704]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:39:05.800084 systemd-networkd[704]: eth0: Link UP
Dec 13 02:39:05.800088 systemd-networkd[704]: eth0: Gained carrier
Dec 13 02:39:05.800099 systemd-networkd[704]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 02:39:05.800230 systemd[1]: Reached target network.target - Network.
Dec 13 02:39:05.820890 systemd-networkd[704]: eth0: DHCPv4 address 172.24.4.28/24, gateway 172.24.4.1 acquired from 172.24.4.1
Dec 13 02:39:05.873340 ignition[648]: Ignition 2.19.0
Dec 13 02:39:05.873353 ignition[648]: Stage: fetch-offline
Dec 13 02:39:05.873391 ignition[648]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:39:05.873400 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:39:05.877273 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 02:39:05.873503 ignition[648]: parsed url from cmdline: ""
Dec 13 02:39:05.877974 systemd-resolved[220]: Detected conflict on linux IN A 172.24.4.28
Dec 13 02:39:05.873507 ignition[648]: no config URL provided
Dec 13 02:39:05.877992 systemd-resolved[220]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Dec 13 02:39:05.873513 ignition[648]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:39:05.873521 ignition[648]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:39:05.873526 ignition[648]: failed to fetch config: resource requires networking
Dec 13 02:39:05.874691 ignition[648]: Ignition finished successfully
Dec 13 02:39:05.883807 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 02:39:05.897151 ignition[714]: Ignition 2.19.0
Dec 13 02:39:05.897164 ignition[714]: Stage: fetch
Dec 13 02:39:05.897353 ignition[714]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:39:05.897364 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:39:05.897457 ignition[714]: parsed url from cmdline: ""
Dec 13 02:39:05.897460 ignition[714]: no config URL provided
Dec 13 02:39:05.897466 ignition[714]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:39:05.897475 ignition[714]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:39:05.897594 ignition[714]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 02:39:05.897702 ignition[714]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 02:39:05.897727 ignition[714]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 02:39:06.256266 ignition[714]: GET result: OK
Dec 13 02:39:06.256432 ignition[714]: parsing config with SHA512: d821974b1136fe69fb1c5ea656571e0c18ddfa09fb0d16b00ae321a085f2c7346037a932efe2b6e4dcecd9b180dbcee5116ec79ee3664bb68bb18defa1f49e2e
Dec 13 02:39:06.266842 unknown[714]: fetched base config from "system"
Dec 13 02:39:06.266869 unknown[714]: fetched base config from "system"
Dec 13 02:39:06.267918 ignition[714]: fetch: fetch complete
Dec 13 02:39:06.266897 unknown[714]: fetched user config from "openstack"
Dec 13 02:39:06.267930 ignition[714]: fetch: fetch passed
Dec 13 02:39:06.268017 ignition[714]: Ignition finished successfully
Dec 13 02:39:06.272190 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 02:39:06.279921 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 02:39:06.322496 ignition[720]: Ignition 2.19.0
Dec 13 02:39:06.324067 ignition[720]: Stage: kargs
Dec 13 02:39:06.324463 ignition[720]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:39:06.324489 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:39:06.326906 ignition[720]: kargs: kargs passed
Dec 13 02:39:06.329196 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 02:39:06.327003 ignition[720]: Ignition finished successfully
Dec 13 02:39:06.338922 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 02:39:06.367952 ignition[726]: Ignition 2.19.0
Dec 13 02:39:06.369503 ignition[726]: Stage: disks
Dec 13 02:39:06.369960 ignition[726]: no configs at "/usr/lib/ignition/base.d"
Dec 13 02:39:06.369987 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:39:06.372396 ignition[726]: disks: disks passed
Dec 13 02:39:06.372532 ignition[726]: Ignition finished successfully
Dec 13 02:39:06.373772 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 02:39:06.376203 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 02:39:06.378136 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 02:39:06.380130 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 02:39:06.381867 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 02:39:06.383680 systemd[1]: Reached target basic.target - Basic System.
Dec 13 02:39:06.392870 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 02:39:06.622140 systemd-fsck[735]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 02:39:06.637696 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 02:39:06.648028 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 02:39:06.808665 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 02:39:06.809078 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 02:39:06.810145 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 02:39:06.821675 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 02:39:06.826363 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 02:39:06.827232 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 02:39:06.829995 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 13 02:39:06.831981 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:39:06.832009 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 02:39:06.857283 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (743)
Dec 13 02:39:06.857352 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:39:06.857384 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:39:06.857413 kernel: BTRFS info (device vda6): using free space tree
Dec 13 02:39:06.859445 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 02:39:06.862914 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 02:39:06.871885 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 02:39:06.876940 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 02:39:06.993017 initrd-setup-root[771]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:39:07.007791 initrd-setup-root[778]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:39:07.014441 initrd-setup-root[785]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:39:07.022089 initrd-setup-root[792]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:39:07.022961 systemd-networkd[704]: eth0: Gained IPv6LL
Dec 13 02:39:07.132151 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 02:39:07.138737 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 02:39:07.140933 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 02:39:07.150915 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 02:39:07.154044 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:39:07.186054 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 02:39:07.189465 ignition[859]: INFO : Ignition 2.19.0
Dec 13 02:39:07.189465 ignition[859]: INFO : Stage: mount
Dec 13 02:39:07.192240 ignition[859]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:39:07.192240 ignition[859]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 02:39:07.192240 ignition[859]: INFO : mount: mount passed
Dec 13 02:39:07.192240 ignition[859]: INFO : Ignition finished successfully
Dec 13 02:39:07.192468 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 02:39:14.088770 coreos-metadata[745]: Dec 13 02:39:14.087 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 02:39:14.136008 coreos-metadata[745]: Dec 13 02:39:14.135 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 02:39:14.153347 coreos-metadata[745]: Dec 13 02:39:14.153 INFO Fetch successful
Dec 13 02:39:14.155908 coreos-metadata[745]: Dec 13 02:39:14.155 INFO wrote hostname ci-4081-2-1-7-a50b4b34f3.novalocal to /sysroot/etc/hostname
Dec 13 02:39:14.159945 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 02:39:14.160492 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 13 02:39:14.172807 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 02:39:14.209962 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 02:39:14.241727 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (877)
Dec 13 02:39:14.250719 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 02:39:14.250815 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:39:14.250845 kernel: BTRFS info (device vda6): using free space tree
Dec 13 02:39:14.259717 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 02:39:14.264502 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 02:39:14.308235 ignition[895]: INFO : Ignition 2.19.0 Dec 13 02:39:14.308235 ignition[895]: INFO : Stage: files Dec 13 02:39:14.311303 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:39:14.311303 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 02:39:14.311303 ignition[895]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:39:14.317029 ignition[895]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:39:14.317029 ignition[895]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:39:14.323706 ignition[895]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:39:14.326238 ignition[895]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:39:14.326238 ignition[895]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:39:14.324593 unknown[895]: wrote ssh authorized keys file for user: core Dec 13 02:39:14.332963 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:39:14.332963 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 02:39:14.399753 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 02:39:14.704095 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:39:14.704095 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 02:39:14.704095 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 02:39:15.233765 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 02:39:15.686510 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 02:39:15.686510 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:39:15.691223 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:39:15.691223 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:39:15.691223 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:39:15.691223 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:39:15.691223 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:39:15.691223 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:39:15.691223 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:39:15.691223 
ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:39:15.691223 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:39:15.691223 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:39:15.691223 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:39:15.691223 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:39:15.691223 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 02:39:16.034103 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 02:39:17.724552 ignition[895]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:39:17.724552 ignition[895]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 02:39:17.790930 ignition[895]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:39:17.795166 ignition[895]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:39:17.795166 ignition[895]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 02:39:17.795166 ignition[895]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 13 02:39:17.795166 ignition[895]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 02:39:17.795166 ignition[895]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:39:17.795166 ignition[895]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:39:17.795166 ignition[895]: INFO : files: files passed Dec 13 02:39:17.795166 ignition[895]: INFO : Ignition finished successfully Dec 13 02:39:17.796592 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 02:39:17.810028 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 02:39:17.815911 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 02:39:17.839297 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:39:17.839496 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
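At the end of the files stage Ignition records an outcome file at /sysroot/etc/.ignition-result.json. Its exact schema is not shown in the log, so this is only a hedged sketch that loads whatever JSON is there and prints it:

    import json

    with open("/sysroot/etc/.ignition-result.json") as f:
        result = json.load(f)
    # Pretty-print the recorded outcome; field names depend on the Ignition version.
    print(json.dumps(result, indent=2, sort_keys=True))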
Dec 13 02:39:17.863992 initrd-setup-root-after-ignition[923]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:39:17.863992 initrd-setup-root-after-ignition[923]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:39:17.868681 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:39:17.869424 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 02:39:17.872296 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 02:39:17.885484 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 02:39:17.935713 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:39:17.935968 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 02:39:17.947733 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 02:39:17.950277 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 02:39:17.953021 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 02:39:17.960923 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 02:39:17.995928 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 02:39:18.003894 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 02:39:18.037379 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 02:39:18.039081 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 02:39:18.041992 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 02:39:18.044582 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:39:18.044916 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 02:39:18.047928 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 02:39:18.049862 systemd[1]: Stopped target basic.target - Basic System. Dec 13 02:39:18.052453 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 02:39:18.055513 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 02:39:18.057880 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 02:39:18.060506 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 02:39:18.063252 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 02:39:18.066108 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 02:39:18.068736 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 02:39:18.071465 systemd[1]: Stopped target swap.target - Swaps. Dec 13 02:39:18.073933 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:39:18.074219 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 02:39:18.077083 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 02:39:18.078832 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 02:39:18.081168 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 02:39:18.083529 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
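The benign grep errors above come from the root-filesystem-completion step, which looks for an enabled-sysext.conf list in /etc and /usr of the new root and tolerates its absence. A rough Python equivalent of that check; the file format (one extension name per line, comments allowed) is an assumption:

    from pathlib import Path

    CANDIDATES = [
        "/sysroot/etc/flatcar/enabled-sysext.conf",
        "/sysroot/usr/share/flatcar/enabled-sysext.conf",
    ]

    def enabled_sysexts():
        names = []
        for candidate in CANDIDATES:
            path = Path(candidate)
            if not path.is_file():
                continue  # grep reports "No such file or directory" here; non-fatal
            for line in path.read_text().splitlines():
                line = line.strip()
                if line and not line.startswith("#"):
                    names.append(line)
        return names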
Dec 13 02:39:18.085712 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:39:18.086108 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 02:39:18.089090 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:39:18.089433 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 02:39:18.091067 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:39:18.091328 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 02:39:18.102177 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 02:39:18.103213 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:39:18.103555 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 02:39:18.109001 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 02:39:18.111681 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:39:18.111947 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 02:39:18.116695 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:39:18.117640 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 02:39:18.124672 ignition[947]: INFO : Ignition 2.19.0 Dec 13 02:39:18.124672 ignition[947]: INFO : Stage: umount Dec 13 02:39:18.124672 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:39:18.124672 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 02:39:18.123191 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:39:18.130595 ignition[947]: INFO : umount: umount passed Dec 13 02:39:18.130595 ignition[947]: INFO : Ignition finished successfully Dec 13 02:39:18.123270 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 02:39:18.126889 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:39:18.126974 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 02:39:18.129451 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:39:18.129525 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 02:39:18.131721 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:39:18.131762 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 02:39:18.132226 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:39:18.132264 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 02:39:18.132744 systemd[1]: Stopped target network.target - Network. Dec 13 02:39:18.133151 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:39:18.133192 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 02:39:18.135731 systemd[1]: Stopped target paths.target - Path Units. Dec 13 02:39:18.136378 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:39:18.136835 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 02:39:18.137519 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 02:39:18.137949 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 02:39:18.138414 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:39:18.138447 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Dec 13 02:39:18.140724 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:39:18.140756 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 02:39:18.141466 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:39:18.141505 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 02:39:18.141977 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 02:39:18.142014 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 02:39:18.142613 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 02:39:18.143806 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 02:39:18.148700 systemd-networkd[704]: eth0: DHCPv6 lease lost Dec 13 02:39:18.154034 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:39:18.154315 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 02:39:18.156519 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:39:18.156787 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 02:39:18.161876 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:39:18.162014 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 02:39:18.169794 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 02:39:18.171078 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:39:18.171194 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 02:39:18.172914 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:39:18.173010 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 02:39:18.175498 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:39:18.175596 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 02:39:18.185155 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 02:39:18.185305 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 02:39:18.187189 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 02:39:18.203679 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:39:18.205780 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:39:18.206083 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 02:39:18.210582 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:39:18.210729 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 02:39:18.212183 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:39:18.212258 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 02:39:18.213950 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:39:18.214028 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 02:39:18.215716 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:39:18.215756 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 02:39:18.216788 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:39:18.216829 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 02:39:18.222817 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 02:39:18.224894 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 02:39:18.224946 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 02:39:18.226139 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:39:18.226183 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:39:18.228430 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:39:18.228523 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 02:39:18.231388 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:39:18.231477 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 02:39:18.263081 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:39:18.263195 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 02:39:18.265082 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 02:39:18.266279 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:39:18.266335 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 02:39:18.273872 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 02:39:18.301089 systemd[1]: Switching root. Dec 13 02:39:18.378153 systemd-journald[185]: Journal stopped Dec 13 02:39:20.809803 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Dec 13 02:39:20.809859 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:39:20.809877 kernel: SELinux: policy capability open_perms=1 Dec 13 02:39:20.809890 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:39:20.809902 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:39:20.809914 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:39:20.809925 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:39:20.809936 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:39:20.809952 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:39:20.809964 kernel: audit: type=1403 audit(1734057559.563:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 02:39:20.809976 systemd[1]: Successfully loaded SELinux policy in 85.433ms. Dec 13 02:39:20.809998 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.746ms. Dec 13 02:39:20.810012 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 02:39:20.810026 systemd[1]: Detected virtualization kvm. Dec 13 02:39:20.810038 systemd[1]: Detected architecture x86-64. Dec 13 02:39:20.810053 systemd[1]: Detected first boot. Dec 13 02:39:20.810068 systemd[1]: Hostname set to <ci-4081-2-1-7-a50b4b34f3.novalocal>. Dec 13 02:39:20.810081 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:39:20.810093 zram_generator::config[989]: No configuration found. Dec 13 02:39:20.810110 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:39:20.810123 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 02:39:20.810135 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
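"Initializing machine ID from VM UUID" above refers to systemd seeding /etc/machine-id from the hypervisor-provided UUID on first boot. A simplified illustration under that assumption (the real code validates the UUID, consults other sources, and needs root to read the sysfs file):

    from pathlib import Path

    # DMI product UUID as exposed by QEMU/KVM (root-only file).
    raw = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = raw.replace("-", "").lower()   # machine-id format: 32 hex digits
    assert len(machine_id) == 32
    print(machine_id)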
Dec 13 02:39:20.810148 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 02:39:20.810161 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 02:39:20.810176 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 02:39:20.810189 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 02:39:20.810201 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 02:39:20.810214 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 02:39:20.810227 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 02:39:20.810239 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 02:39:20.810252 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 02:39:20.810264 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 02:39:20.810278 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 02:39:20.810293 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 02:39:20.810306 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 02:39:20.810318 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 02:39:20.810331 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 02:39:20.810344 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 02:39:20.810356 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 02:39:20.810369 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 02:39:20.810384 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 02:39:20.810396 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 02:39:20.810412 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 02:39:20.810425 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 02:39:20.810437 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 02:39:20.810450 systemd[1]: Reached target slices.target - Slice Units. Dec 13 02:39:20.810463 systemd[1]: Reached target swap.target - Swaps. Dec 13 02:39:20.810479 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 02:39:20.810494 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 02:39:20.810507 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 02:39:20.810519 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 02:39:20.810532 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 02:39:20.810544 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 02:39:20.810557 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 02:39:20.810569 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 02:39:20.810581 systemd[1]: Mounting media.mount - External Media Directory... 
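Names like system-addon\x2dconfig.slice and system-serial\x2dgetty.slice above use systemd's unit-name escaping: '/' becomes '-', and other unsafe bytes (including a literal '-') become \xNN. A minimal sketch of that mapping; the real systemd-escape has extra rules (e.g. for a leading '.') not reproduced here:

    def systemd_escape(s: str) -> str:
        out = []
        for ch in s:
            if ch == "/":
                out.append("-")                  # path separators become dashes
            elif ch.isalnum() or ch in ":_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))  # e.g. '-' -> \x2d
        return "".join(out)

    print(systemd_escape("addon-config"))       # addon\x2dconfig
    print(systemd_escape("disk/by-label/OEM"))  # disk-by\x2dlabel-OEM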
Dec 13 02:39:20.810594 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:39:20.813957 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 02:39:20.813975 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 02:39:20.813988 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 02:39:20.814002 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:39:20.814014 systemd[1]: Reached target machines.target - Containers. Dec 13 02:39:20.814027 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 02:39:20.814039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:39:20.814052 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 02:39:20.814069 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 02:39:20.814081 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:39:20.814094 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 02:39:20.814106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 02:39:20.814118 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 02:39:20.814131 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:39:20.814144 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:39:20.814157 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 02:39:20.814170 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 02:39:20.814185 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 02:39:20.814197 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 02:39:20.814210 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 02:39:20.814223 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 02:39:20.814236 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 02:39:20.814248 kernel: fuse: init (API version 7.39) Dec 13 02:39:20.814260 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 02:39:20.814272 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 02:39:20.814285 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 02:39:20.814300 systemd[1]: Stopped verity-setup.service. Dec 13 02:39:20.814313 kernel: ACPI: bus type drm_connector registered Dec 13 02:39:20.814325 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:39:20.814338 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 02:39:20.814351 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 02:39:20.814363 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 02:39:20.814375 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
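Each modprobe@<name>.service instance above is a thin wrapper that asks the kernel to load one module and tolerates failure. Roughly, per the stock modprobe@.service template (the exact flags are assumed from upstream systemd):

    import subprocess

    def load_module(name: str) -> bool:
        # -a: all names, -b: apply blacklists, -q: quiet if the module is absent
        return subprocess.run(["modprobe", "-abq", name]).returncode == 0

    for mod in ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]:
        load_module(mod)  # mirrors the "Load Kernel Module <mod>" units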
Dec 13 02:39:20.814389 kernel: loop: module loaded Dec 13 02:39:20.814401 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 02:39:20.814413 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 02:39:20.814426 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 02:39:20.814438 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:39:20.814451 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 02:39:20.814465 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:39:20.814478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:39:20.814490 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:39:20.814503 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 02:39:20.814518 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 02:39:20.814531 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:39:20.814563 systemd-journald[1085]: Collecting audit messages is disabled. Dec 13 02:39:20.814587 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:39:20.814600 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:39:20.814637 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 02:39:20.814650 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:39:20.814664 systemd-journald[1085]: Journal started Dec 13 02:39:20.814695 systemd-journald[1085]: Runtime Journal (/run/log/journal/128a78b7cba94b1fadca9b7b03f96146) is 4.9M, max 39.3M, 34.4M free. Dec 13 02:39:20.436356 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:39:20.460642 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 02:39:20.461047 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 02:39:20.818921 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 02:39:20.818951 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 02:39:20.820718 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 02:39:20.821476 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 02:39:20.822312 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 02:39:20.833286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 02:39:20.834380 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 02:39:20.845704 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 02:39:20.848754 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 02:39:20.849347 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:39:20.849381 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 02:39:20.851048 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 02:39:20.852776 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 02:39:20.854349 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Dec 13 02:39:20.855039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:39:20.858880 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 02:39:20.863797 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 02:39:20.864388 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:39:20.865236 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 02:39:20.865810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 02:39:20.868780 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 02:39:20.871743 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 02:39:20.880176 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 02:39:20.883049 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 02:39:20.886696 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 02:39:20.888775 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 02:39:20.900688 kernel: loop0: detected capacity change from 0 to 8 Dec 13 02:39:20.898831 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 02:39:20.914765 systemd-journald[1085]: Time spent on flushing to /var/log/journal/128a78b7cba94b1fadca9b7b03f96146 is 65.381ms for 945 entries. Dec 13 02:39:20.914765 systemd-journald[1085]: System Journal (/var/log/journal/128a78b7cba94b1fadca9b7b03f96146) is 8.0M, max 584.8M, 576.8M free. Dec 13 02:39:21.001807 systemd-journald[1085]: Received client request to flush runtime journal. Dec 13 02:39:21.001854 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:39:21.001873 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 02:39:20.937241 udevadm[1125]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 02:39:20.938663 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 02:39:20.940537 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 02:39:20.950813 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 02:39:20.962807 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 02:39:21.003814 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 02:39:21.078029 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:39:21.081511 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 02:39:21.082984 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 02:39:21.086081 kernel: loop2: detected capacity change from 0 to 142488 Dec 13 02:39:21.104967 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 02:39:21.140976 systemd-tmpfiles[1142]: ACLs are not supported, ignoring. Dec 13 02:39:21.140998 systemd-tmpfiles[1142]: ACLs are not supported, ignoring. 
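The journald line above reports 65.381ms spent flushing 945 entries to /var/log/journal. A small parsing sketch for that message format, useful for eyeballing per-entry cost:

    import re

    line = ("Time spent on flushing to /var/log/journal/"
            "128a78b7cba94b1fadca9b7b03f96146 is 65.381ms for 945 entries.")
    m = re.search(r"is ([\d.]+)ms for (\d+) entries", line)
    ms, entries = float(m.group(1)), int(m.group(2))
    print(f"{ms / entries * 1000:.1f} us/entry")  # ~69.2 us per entry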
Dec 13 02:39:21.149647 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 02:39:21.165686 kernel: loop3: detected capacity change from 0 to 140768 Dec 13 02:39:21.241656 kernel: loop4: detected capacity change from 0 to 8 Dec 13 02:39:21.247814 kernel: loop5: detected capacity change from 0 to 211296 Dec 13 02:39:21.304669 kernel: loop6: detected capacity change from 0 to 142488 Dec 13 02:39:21.391670 kernel: loop7: detected capacity change from 0 to 140768 Dec 13 02:39:21.475931 (sd-merge)[1147]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Dec 13 02:39:21.478962 (sd-merge)[1147]: Merged extensions into '/usr'. Dec 13 02:39:21.487155 systemd[1]: Reloading requested from client PID 1123 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 02:39:21.487171 systemd[1]: Reloading... Dec 13 02:39:21.591639 zram_generator::config[1169]: No configuration found. Dec 13 02:39:21.783631 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:39:21.843838 systemd[1]: Reloading finished in 356 ms. Dec 13 02:39:21.871033 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 02:39:21.873576 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 02:39:21.886831 systemd[1]: Starting ensure-sysext.service... Dec 13 02:39:21.889467 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 02:39:21.907850 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 02:39:21.913743 systemd[1]: Reloading requested from client PID 1229 ('systemctl') (unit ensure-sysext.service)... Dec 13 02:39:21.913758 systemd[1]: Reloading... Dec 13 02:39:21.921506 systemd-tmpfiles[1230]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:39:21.924993 systemd-tmpfiles[1230]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 02:39:21.925922 systemd-tmpfiles[1230]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:39:21.926227 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Dec 13 02:39:21.926288 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Dec 13 02:39:21.930921 systemd-tmpfiles[1230]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 02:39:21.930933 systemd-tmpfiles[1230]: Skipping /boot Dec 13 02:39:21.939053 systemd-tmpfiles[1230]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 02:39:21.939066 systemd-tmpfiles[1230]: Skipping /boot Dec 13 02:39:21.977032 systemd-udevd[1232]: Using default interface naming scheme 'v255'. Dec 13 02:39:21.986653 zram_generator::config[1257]: No configuration found. Dec 13 02:39:21.993805 ldconfig[1118]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
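sd-merge found four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack') and overlaid them onto /usr, which is what triggers the service-manager reload that follows. A sketch that lists candidate images the way one might audit them; the directory set is an assumption (the log only shows /etc/extensions/kubernetes.raw being linked):

    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/var/lib/extensions"]

    found = []
    for d in map(Path, SEARCH_DIRS):
        if d.is_dir():
            # .raw images and symlinks to them, like etc/extensions/kubernetes.raw
            found += sorted(p.name for p in d.glob("*.raw"))
    print(found)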
Dec 13 02:39:22.090385 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1280) Dec 13 02:39:22.112341 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1280) Dec 13 02:39:22.118677 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1283) Dec 13 02:39:22.186656 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 02:39:22.197190 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:39:22.221656 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:39:22.238888 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 02:39:22.251788 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 02:39:22.284554 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 02:39:22.285068 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 02:39:22.286192 systemd[1]: Reloading finished in 372 ms. Dec 13 02:39:22.298658 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:39:22.305045 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 02:39:22.306137 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 02:39:22.313460 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 02:39:22.324283 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 13 02:39:22.324350 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 13 02:39:22.327689 kernel: Console: switching to colour dummy device 80x25 Dec 13 02:39:22.328697 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 02:39:22.328731 kernel: [drm] features: -context_init Dec 13 02:39:22.331629 kernel: [drm] number of scanouts: 1 Dec 13 02:39:22.331672 kernel: [drm] number of cap sets: 0 Dec 13 02:39:22.333626 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Dec 13 02:39:22.342178 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 02:39:22.342270 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 02:39:22.341691 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:39:22.346634 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 02:39:22.348436 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 02:39:22.356314 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 02:39:22.356570 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:39:22.359196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:39:22.362052 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 02:39:22.364793 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:39:22.364975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 13 02:39:22.366836 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 02:39:22.370869 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 02:39:22.374473 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 02:39:22.378500 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 02:39:22.381095 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 02:39:22.384141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:39:22.384238 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:39:22.387339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:39:22.387478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:39:22.389597 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:39:22.389757 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 02:39:22.404383 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:39:22.404768 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:39:22.414009 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:39:22.419005 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 02:39:22.420978 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:39:22.423127 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:39:22.428030 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 02:39:22.428110 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:39:22.431525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:39:22.431725 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:39:22.436954 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 02:39:22.440432 systemd[1]: Finished ensure-sysext.service. Dec 13 02:39:22.445933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:39:22.446499 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:39:22.455186 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:39:22.460706 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 02:39:22.466113 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 02:39:22.471829 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:39:22.472022 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:39:22.475763 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 13 02:39:22.475969 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 02:39:22.478529 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:39:22.478935 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 02:39:22.489585 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 02:39:22.502806 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 02:39:22.505659 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 02:39:22.507476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:39:22.508306 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 02:39:22.518860 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 02:39:22.530845 lvm[1385]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:39:22.554536 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 02:39:22.557445 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 02:39:22.557911 augenrules[1393]: No rules Dec 13 02:39:22.560507 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:39:22.561790 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 02:39:22.564658 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 02:39:22.569538 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 02:39:22.573543 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 02:39:22.587145 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:39:22.588831 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 02:39:22.622954 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 02:39:22.639925 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:39:22.670083 systemd-resolved[1352]: Positive Trust Anchors: Dec 13 02:39:22.670362 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:39:22.670453 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 02:39:22.674802 systemd-resolved[1352]: Using system hostname 'ci-4081-2-1-7-a50b4b34f3.novalocal'. Dec 13 02:39:22.676132 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 02:39:22.677033 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
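systemd-resolved's negative trust anchors above (home.arpa, local, lan, the RFC 1918 reverse zones, and so on) mark subtrees where DNSSEC validation is skipped. A toy suffix check over a subset of that list (the reverse-DNS zones are omitted here for brevity):

    NEGATIVE_ANCHORS = {
        "home.arpa", "ipv4only.arpa", "resolver.arpa", "corp", "home",
        "internal", "intranet", "lan", "local", "private", "test",
    }

    def dnssec_skipped(name: str) -> bool:
        labels = name.rstrip(".").split(".")
        return any(".".join(labels[i:]) in NEGATIVE_ANCHORS
                   for i in range(len(labels)))

    print(dnssec_skipped("printer.lan"))  # True: under a negative anchor
    print(dnssec_skipped("example.com"))  # False: validation applies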
Dec 13 02:39:22.683418 systemd-networkd[1351]: lo: Link UP Dec 13 02:39:22.683430 systemd-networkd[1351]: lo: Gained carrier Dec 13 02:39:22.684683 systemd-networkd[1351]: Enumeration completed Dec 13 02:39:22.684765 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 02:39:22.685345 systemd[1]: Reached target network.target - Network. Dec 13 02:39:22.687415 systemd-networkd[1351]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:39:22.687492 systemd-networkd[1351]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:39:22.688719 systemd-networkd[1351]: eth0: Link UP Dec 13 02:39:22.688824 systemd-networkd[1351]: eth0: Gained carrier Dec 13 02:39:22.688933 systemd-networkd[1351]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:39:22.691882 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 02:39:22.693997 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 02:39:22.697714 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 02:39:22.699061 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 02:39:22.700269 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 02:39:22.701905 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 02:39:22.703743 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:39:22.703794 systemd[1]: Reached target paths.target - Path Units. Dec 13 02:39:22.703858 systemd-networkd[1351]: eth0: DHCPv4 address 172.24.4.28/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 02:39:22.705164 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection. Dec 13 02:39:22.706072 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 02:39:22.708256 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 02:39:22.709935 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 02:39:22.712379 systemd[1]: Reached target timers.target - Timer Units. Dec 13 02:39:22.715170 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 02:39:22.717995 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 02:39:22.724794 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 02:39:22.728371 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 02:39:22.729072 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 02:39:22.729564 systemd[1]: Reached target basic.target - Basic System. Dec 13 02:39:22.730084 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 02:39:22.730115 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 02:39:22.736676 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 02:39:22.740254 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
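The DHCPv4 line above grants eth0 the address 172.24.4.28/24 with gateway 172.24.4.1. The stdlib ipaddress module can confirm the derived on-link network:

    import ipaddress

    lease = ipaddress.ip_interface("172.24.4.28/24")  # address from the log
    gateway = ipaddress.ip_address("172.24.4.1")

    print(lease.network)             # 172.24.4.0/24
    print(gateway in lease.network)  # True: the gateway is directly reachable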
Dec 13 02:39:22.746763 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 02:39:22.753187 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 02:39:22.759781 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 02:39:22.760333 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 02:39:22.765832 jq[1425]: false Dec 13 02:39:22.769834 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 02:39:22.779709 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 02:39:22.783859 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 02:39:22.788381 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 02:39:22.794862 extend-filesystems[1426]: Found loop4 Dec 13 02:39:22.794862 extend-filesystems[1426]: Found loop5 Dec 13 02:39:22.794862 extend-filesystems[1426]: Found loop6 Dec 13 02:39:22.794862 extend-filesystems[1426]: Found loop7 Dec 13 02:39:22.794862 extend-filesystems[1426]: Found vda Dec 13 02:39:22.794862 extend-filesystems[1426]: Found vda1 Dec 13 02:39:22.794862 extend-filesystems[1426]: Found vda2 Dec 13 02:39:22.794862 extend-filesystems[1426]: Found vda3 Dec 13 02:39:22.794862 extend-filesystems[1426]: Found usr Dec 13 02:39:22.794862 extend-filesystems[1426]: Found vda4 Dec 13 02:39:22.794862 extend-filesystems[1426]: Found vda6 Dec 13 02:39:22.794862 extend-filesystems[1426]: Found vda7 Dec 13 02:39:22.794862 extend-filesystems[1426]: Found vda9 Dec 13 02:39:22.794862 extend-filesystems[1426]: Checking size of /dev/vda9 Dec 13 02:39:22.795952 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 02:39:22.808509 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:39:22.812514 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:39:22.814759 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 02:39:22.831740 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 02:39:22.842868 dbus-daemon[1424]: [system] SELinux support is enabled Dec 13 02:39:22.833003 systemd-timesyncd[1376]: Contacted time server 51.255.141.76:123 (0.flatcar.pool.ntp.org). Dec 13 02:39:22.859045 extend-filesystems[1426]: Resized partition /dev/vda9 Dec 13 02:39:22.859458 jq[1443]: true Dec 13 02:39:22.833045 systemd-timesyncd[1376]: Initial clock synchronization to Fri 2024-12-13 02:39:22.775204 UTC. Dec 13 02:39:22.837984 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:39:22.838149 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 02:39:22.838416 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:39:22.838552 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 02:39:22.847833 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 02:39:22.852371 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:39:22.852580 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
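The extend-filesystems walk above (loop4 through vda9) is a plain block-device enumeration before /dev/vda9 is picked for growing. One common source for such a list is /proc/partitions, as in this sketch (the service's actual discovery method is not shown in the log):

    def list_block_devices():
        # /proc/partitions columns: major minor #blocks name, after 2 header lines
        with open("/proc/partitions") as f:
            return [line.split()[3] for line in f.readlines()[2:] if line.strip()]

    print(list_block_devices())  # e.g. ['vda', 'vda1', ..., 'vda9', 'loop0', ...]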
Dec 13 02:39:22.874087 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Dec 13 02:39:22.913412 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Dec 13 02:39:22.913441 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1298) Dec 13 02:39:22.913485 jq[1449]: true Dec 13 02:39:22.882643 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:39:22.913842 update_engine[1441]: I20241213 02:39:22.877557 1441 main.cc:92] Flatcar Update Engine starting Dec 13 02:39:22.913842 update_engine[1441]: I20241213 02:39:22.904361 1441 update_check_scheduler.cc:74] Next update check in 7m27s Dec 13 02:39:22.882669 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 02:39:22.912380 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:39:22.912400 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 02:39:22.921422 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 02:39:22.926897 systemd[1]: Started update-engine.service - Update Engine. Dec 13 02:39:22.933627 tar[1446]: linux-amd64/helm Dec 13 02:39:22.934915 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 02:39:22.991384 systemd-logind[1434]: New seat seat0. Dec 13 02:39:22.998532 systemd-logind[1434]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:39:22.998555 systemd-logind[1434]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:39:22.998804 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 02:39:23.196020 locksmithd[1463]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:39:23.394847 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:39:23.430534 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 02:39:23.443137 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 02:39:23.460911 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:39:23.461136 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 02:39:23.472863 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 02:39:23.539665 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Dec 13 02:39:23.552121 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 02:39:23.573309 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 02:39:23.580215 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 02:39:23.581580 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 02:39:23.858716 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 02:39:23.877799 systemd[1]: Started sshd@0-172.24.4.28:22-172.24.4.1:39488.service - OpenSSH per-connection server daemon (172.24.4.1:39488). 
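The resize announced above takes /dev/vda9 from 1617920 to 4635643 blocks; with the 4 KiB block size the completion message later confirms, that is roughly 6.2 GiB growing to 17.7 GiB:

    BLOCK_SIZE = 4096  # "(4k) blocks" per the resize completion message
    old_blocks, new_blocks = 1_617_920, 4_635_643

    to_gib = lambda blocks: blocks * BLOCK_SIZE / 2**30
    print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")
    # 6.17 GiB -> 17.68 GiB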
Dec 13 02:39:24.202217 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 02:39:24.202217 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 3 Dec 13 02:39:24.202217 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Dec 13 02:39:24.224864 containerd[1452]: time="2024-12-13T02:39:24.201726725Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 02:39:24.225345 bash[1478]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:39:24.203792 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:39:24.232840 extend-filesystems[1426]: Resized filesystem in /dev/vda9 Dec 13 02:39:24.204450 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 02:39:24.221751 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 02:39:24.251143 systemd[1]: Starting sshkeys.service... Dec 13 02:39:24.285415 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 02:39:24.292178 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 02:39:24.303107 containerd[1452]: time="2024-12-13T02:39:24.302402654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:39:24.304647 containerd[1452]: time="2024-12-13T02:39:24.303801144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:39:24.304647 containerd[1452]: time="2024-12-13T02:39:24.303836804Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:39:24.304647 containerd[1452]: time="2024-12-13T02:39:24.303855302Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:39:24.304647 containerd[1452]: time="2024-12-13T02:39:24.304013277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 02:39:24.304647 containerd[1452]: time="2024-12-13T02:39:24.304031735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 02:39:24.304647 containerd[1452]: time="2024-12-13T02:39:24.304121345Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:39:24.304647 containerd[1452]: time="2024-12-13T02:39:24.304137394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:39:24.304647 containerd[1452]: time="2024-12-13T02:39:24.304284597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:39:24.304647 containerd[1452]: time="2024-12-13T02:39:24.304302497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:39:24.304647 containerd[1452]: time="2024-12-13T02:39:24.304317032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:39:24.304647 containerd[1452]: time="2024-12-13T02:39:24.304332414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:39:24.305216 containerd[1452]: time="2024-12-13T02:39:24.304413432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:39:24.305216 containerd[1452]: time="2024-12-13T02:39:24.304632176Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:39:24.305216 containerd[1452]: time="2024-12-13T02:39:24.304731721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:39:24.305216 containerd[1452]: time="2024-12-13T02:39:24.304748148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:39:24.305216 containerd[1452]: time="2024-12-13T02:39:24.304839182Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:39:24.305216 containerd[1452]: time="2024-12-13T02:39:24.304897482Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:39:24.316352 containerd[1452]: time="2024-12-13T02:39:24.316252509Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:39:24.316352 containerd[1452]: time="2024-12-13T02:39:24.316345186Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:39:24.316663 containerd[1452]: time="2024-12-13T02:39:24.316365993Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 02:39:24.316663 containerd[1452]: time="2024-12-13T02:39:24.316383286Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 02:39:24.316663 containerd[1452]: time="2024-12-13T02:39:24.316399643Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:39:24.316663 containerd[1452]: time="2024-12-13T02:39:24.316544387Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:39:24.317048 containerd[1452]: time="2024-12-13T02:39:24.316853169Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:39:24.317048 containerd[1452]: time="2024-12-13T02:39:24.316959952Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 02:39:24.317048 containerd[1452]: time="2024-12-13T02:39:24.316980271Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 02:39:24.317048 containerd[1452]: time="2024-12-13T02:39:24.316995146Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Dec 13 02:39:24.317048 containerd[1452]: time="2024-12-13T02:39:24.317011194Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:39:24.317048 containerd[1452]: time="2024-12-13T02:39:24.317024455Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:39:24.317048 containerd[1452]: time="2024-12-13T02:39:24.317037656Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317053067Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317068747Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317082586Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317095647Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317110511Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317135977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317155162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317169925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317184989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317198230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317211769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317230575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317245409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317647 containerd[1452]: time="2024-12-13T02:39:24.317259257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317958 containerd[1452]: time="2024-12-13T02:39:24.317279825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317958 containerd[1452]: time="2024-12-13T02:39:24.317294370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Dec 13 02:39:24.317958 containerd[1452]: time="2024-12-13T02:39:24.317308418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317958 containerd[1452]: time="2024-12-13T02:39:24.317323490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317958 containerd[1452]: time="2024-12-13T02:39:24.317343442Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 02:39:24.317958 containerd[1452]: time="2024-12-13T02:39:24.317364010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317958 containerd[1452]: time="2024-12-13T02:39:24.317377171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.317958 containerd[1452]: time="2024-12-13T02:39:24.317388689Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:39:24.317958 containerd[1452]: time="2024-12-13T02:39:24.317444401Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:39:24.317958 containerd[1452]: time="2024-12-13T02:39:24.317464979Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 02:39:24.317958 containerd[1452]: time="2024-12-13T02:39:24.317477702Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:39:24.317958 containerd[1452]: time="2024-12-13T02:39:24.317490525Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 02:39:24.317958 containerd[1452]: time="2024-12-13T02:39:24.317501268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:39:24.318205 containerd[1452]: time="2024-12-13T02:39:24.317513901Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 02:39:24.318205 containerd[1452]: time="2024-12-13T02:39:24.317524235Z" level=info msg="NRI interface is disabled by configuration." Dec 13 02:39:24.318205 containerd[1452]: time="2024-12-13T02:39:24.317535026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 02:39:24.320548 containerd[1452]: time="2024-12-13T02:39:24.319856190Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:39:24.320548 containerd[1452]: time="2024-12-13T02:39:24.319935766Z" level=info msg="Connect containerd service" Dec 13 02:39:24.320548 containerd[1452]: time="2024-12-13T02:39:24.319984925Z" level=info msg="using legacy CRI server" Dec 13 02:39:24.320548 containerd[1452]: time="2024-12-13T02:39:24.319993587Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 02:39:24.320548 containerd[1452]: time="2024-12-13T02:39:24.320074108Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:39:24.321380 containerd[1452]: time="2024-12-13T02:39:24.320738942Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:39:24.321380 
containerd[1452]: time="2024-12-13T02:39:24.320883567Z" level=info msg="Start subscribing containerd event" Dec 13 02:39:24.322560 containerd[1452]: time="2024-12-13T02:39:24.322522962Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:39:24.322627 containerd[1452]: time="2024-12-13T02:39:24.322590730Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:39:24.324487 containerd[1452]: time="2024-12-13T02:39:24.323473162Z" level=info msg="Start recovering state" Dec 13 02:39:24.324487 containerd[1452]: time="2024-12-13T02:39:24.323625503Z" level=info msg="Start event monitor" Dec 13 02:39:24.324487 containerd[1452]: time="2024-12-13T02:39:24.323646091Z" level=info msg="Start snapshots syncer" Dec 13 02:39:24.324487 containerd[1452]: time="2024-12-13T02:39:24.323663702Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:39:24.324487 containerd[1452]: time="2024-12-13T02:39:24.323673438Z" level=info msg="Start streaming server" Dec 13 02:39:24.324487 containerd[1452]: time="2024-12-13T02:39:24.323754626Z" level=info msg="containerd successfully booted in 0.146432s" Dec 13 02:39:24.323833 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 02:39:24.494029 systemd-networkd[1351]: eth0: Gained IPv6LL Dec 13 02:39:24.501135 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 02:39:24.507183 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 02:39:24.535950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:39:24.551046 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 02:39:24.579405 tar[1446]: linux-amd64/LICENSE Dec 13 02:39:24.580735 tar[1446]: linux-amd64/README.md Dec 13 02:39:24.590546 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 02:39:24.660423 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 02:39:25.270645 sshd[1506]: Accepted publickey for core from 172.24.4.1 port 39488 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:39:25.331200 sshd[1506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:39:25.361929 systemd-logind[1434]: New session 1 of user core. Dec 13 02:39:25.366915 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 02:39:25.387740 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 02:39:25.453520 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 02:39:25.471380 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 02:39:25.525401 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:39:26.013060 systemd[1535]: Queued start job for default target default.target. Dec 13 02:39:26.025526 systemd[1535]: Created slice app.slice - User Application Slice. Dec 13 02:39:26.025556 systemd[1535]: Reached target paths.target - Paths. Dec 13 02:39:26.025570 systemd[1535]: Reached target timers.target - Timers. Dec 13 02:39:26.026945 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 02:39:26.047148 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 02:39:26.047277 systemd[1535]: Reached target sockets.target - Sockets. Dec 13 02:39:26.047293 systemd[1535]: Reached target basic.target - Basic System. 
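Annotation: containerd boots successfully above but logs a CNI error because no network config exists yet in /etc/cni/net.d (NetworkPluginConfDir in the CRI config dump). A hedged sketch of the equivalent check; the directory path comes from the log, the script itself is purely illustrative and not part of the boot sequence.

```python
# Mimic the check behind containerd's "no network config found in
# /etc/cni/net.d" error: pod networking stays unready until a CNI
# config file appears in this directory.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")  # NetworkPluginConfDir from the CRI config dump

confs = sorted(CNI_CONF_DIR.glob("*.conf*")) if CNI_CONF_DIR.is_dir() else []
if not confs:
    print(f"no CNI network config in {CNI_CONF_DIR}; pod networking not ready yet")
else:
    for conf in confs:
        print("found CNI config:", conf)
```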
Dec 13 02:39:26.047332 systemd[1535]: Reached target default.target - Main User Target. Dec 13 02:39:26.047358 systemd[1535]: Startup finished in 508ms. Dec 13 02:39:26.047465 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 02:39:26.061281 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 02:39:26.465048 systemd[1]: Started sshd@1-172.24.4.28:22-172.24.4.1:59582.service - OpenSSH per-connection server daemon (172.24.4.1:59582). Dec 13 02:39:27.440897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:39:27.441380 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:39:27.955253 sshd[1547]: Accepted publickey for core from 172.24.4.1 port 59582 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:39:27.957177 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:39:27.971372 systemd-logind[1434]: New session 2 of user core. Dec 13 02:39:27.978092 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 02:39:28.674924 sshd[1547]: pam_unix(sshd:session): session closed for user core Dec 13 02:39:28.693792 systemd[1]: sshd@1-172.24.4.28:22-172.24.4.1:59582.service: Deactivated successfully. Dec 13 02:39:28.696902 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:39:28.698512 systemd-logind[1434]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:39:28.710386 systemd[1]: Started sshd@2-172.24.4.28:22-172.24.4.1:59588.service - OpenSSH per-connection server daemon (172.24.4.1:59588). Dec 13 02:39:28.713654 systemd-logind[1434]: Removed session 2. Dec 13 02:39:28.728462 login[1502]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 02:39:28.746500 login[1503]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 02:39:28.750848 systemd-logind[1434]: New session 3 of user core. Dec 13 02:39:28.754093 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 02:39:28.771854 systemd-logind[1434]: New session 4 of user core. Dec 13 02:39:28.777815 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 02:39:29.811026 coreos-metadata[1421]: Dec 13 02:39:29.810 WARN failed to locate config-drive, using the metadata service API instead Dec 13 02:39:29.886062 coreos-metadata[1421]: Dec 13 02:39:29.885 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 13 02:39:30.030930 sshd[1569]: Accepted publickey for core from 172.24.4.1 port 59588 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:39:30.033505 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:39:30.043553 systemd-logind[1434]: New session 5 of user core. Dec 13 02:39:30.053057 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 13 02:39:30.096862 coreos-metadata[1421]: Dec 13 02:39:30.096 INFO Fetch successful Dec 13 02:39:30.097677 coreos-metadata[1421]: Dec 13 02:39:30.097 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 02:39:30.115911 coreos-metadata[1421]: Dec 13 02:39:30.115 INFO Fetch successful Dec 13 02:39:30.115911 coreos-metadata[1421]: Dec 13 02:39:30.115 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 02:39:30.132294 coreos-metadata[1421]: Dec 13 02:39:30.132 INFO Fetch successful Dec 13 02:39:30.132294 coreos-metadata[1421]: Dec 13 02:39:30.132 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 02:39:30.148117 coreos-metadata[1421]: Dec 13 02:39:30.147 INFO Fetch successful Dec 13 02:39:30.148117 coreos-metadata[1421]: Dec 13 02:39:30.148 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 02:39:30.164156 coreos-metadata[1421]: Dec 13 02:39:30.164 INFO Fetch successful Dec 13 02:39:30.164156 coreos-metadata[1421]: Dec 13 02:39:30.164 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 02:39:30.181559 coreos-metadata[1421]: Dec 13 02:39:30.179 INFO Fetch successful Dec 13 02:39:30.197343 kubelet[1555]: E1213 02:39:30.197234 1555 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:39:30.204969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:39:30.205278 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:39:30.206870 systemd[1]: kubelet.service: Consumed 2.091s CPU time. Dec 13 02:39:30.230125 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 02:39:30.231520 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 02:39:30.692914 sshd[1569]: pam_unix(sshd:session): session closed for user core Dec 13 02:39:30.698761 systemd-logind[1434]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:39:30.699461 systemd[1]: sshd@2-172.24.4.28:22-172.24.4.1:59588.service: Deactivated successfully. Dec 13 02:39:30.703531 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:39:30.707846 systemd-logind[1434]: Removed session 5. Dec 13 02:39:31.392128 coreos-metadata[1514]: Dec 13 02:39:31.391 WARN failed to locate config-drive, using the metadata service API instead Dec 13 02:39:31.433192 coreos-metadata[1514]: Dec 13 02:39:31.433 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 02:39:31.448560 coreos-metadata[1514]: Dec 13 02:39:31.448 INFO Fetch successful Dec 13 02:39:31.448560 coreos-metadata[1514]: Dec 13 02:39:31.448 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 02:39:31.461158 coreos-metadata[1514]: Dec 13 02:39:31.461 INFO Fetch successful Dec 13 02:39:31.466424 unknown[1514]: wrote ssh authorized keys file for user: core Dec 13 02:39:31.505447 update-ssh-keys[1609]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:39:31.506479 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 02:39:31.509273 systemd[1]: Finished sshkeys.service. 
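Annotation: the coreos-metadata entries above show the fallback path when no config drive is found, fetching instance attributes from the link-local metadata service one key at a time. A rough stand-alone sketch of the same fetches; the base URL and key names are copied from the logged requests, and the script only works from inside the instance (169.254.169.254 is link-local).

```python
# Fetch the same EC2-style metadata keys that coreos-metadata requested above.
from urllib.request import urlopen

BASE = "http://169.254.169.254/latest/meta-data"
for key in ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"):
    with urlopen(f"{BASE}/{key}", timeout=5) as resp:
        print(f"{key}: {resp.read().decode().strip()}")
```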
Dec 13 02:39:31.515289 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 02:39:31.515852 systemd[1]: Startup finished in 1.168s (kernel) + 16.733s (initrd) + 12.035s (userspace) = 29.937s. Dec 13 02:39:40.303708 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:39:40.317035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:39:40.676891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:39:40.682808 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:39:40.694781 systemd[1]: Started sshd@3-172.24.4.28:22-172.24.4.1:60414.service - OpenSSH per-connection server daemon (172.24.4.1:60414). Dec 13 02:39:40.900043 kubelet[1621]: E1213 02:39:40.899934 1621 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:39:40.903455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:39:40.903722 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:39:42.061885 sshd[1627]: Accepted publickey for core from 172.24.4.1 port 60414 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:39:42.064775 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:39:42.077679 systemd-logind[1434]: New session 6 of user core. Dec 13 02:39:42.086991 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 02:39:42.926261 sshd[1627]: pam_unix(sshd:session): session closed for user core Dec 13 02:39:42.940366 systemd[1]: sshd@3-172.24.4.28:22-172.24.4.1:60414.service: Deactivated successfully. Dec 13 02:39:42.944748 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:39:42.948961 systemd-logind[1434]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:39:42.956253 systemd[1]: Started sshd@4-172.24.4.28:22-172.24.4.1:60420.service - OpenSSH per-connection server daemon (172.24.4.1:60420). Dec 13 02:39:42.960960 systemd-logind[1434]: Removed session 6. Dec 13 02:39:44.361020 sshd[1637]: Accepted publickey for core from 172.24.4.1 port 60420 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:39:44.364055 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:39:44.375929 systemd-logind[1434]: New session 7 of user core. Dec 13 02:39:44.384006 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 02:39:45.058927 sshd[1637]: pam_unix(sshd:session): session closed for user core Dec 13 02:39:45.069920 systemd[1]: sshd@4-172.24.4.28:22-172.24.4.1:60420.service: Deactivated successfully. Dec 13 02:39:45.073140 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:39:45.077003 systemd-logind[1434]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:39:45.082166 systemd[1]: Started sshd@5-172.24.4.28:22-172.24.4.1:40550.service - OpenSSH per-connection server daemon (172.24.4.1:40550). Dec 13 02:39:45.085150 systemd-logind[1434]: Removed session 7. 
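Annotation: the kubelet failure above (and its repeats later in the log) is a pre-flight exit, not a crash: the unit starts before any provisioning step (typically kubeadm or similar) has written /var/lib/kubelet/config.yaml, so kubelet exits with status 1 and systemd schedules a restart. A trivial sketch of that same check; the path comes from the error text, the script is illustrative.

```python
# Reproduce the check that keeps failing in the kubelet log entries above.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if KUBELET_CONFIG.is_file():
    print("kubelet config present; kubelet can start")
else:
    print(f"{KUBELET_CONFIG} missing; kubelet exits and systemd retries")
```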
Dec 13 02:39:46.225448 sshd[1644]: Accepted publickey for core from 172.24.4.1 port 40550 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:39:46.228019 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:39:46.239054 systemd-logind[1434]: New session 8 of user core. Dec 13 02:39:46.246900 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 02:39:46.910546 sshd[1644]: pam_unix(sshd:session): session closed for user core Dec 13 02:39:46.921775 systemd[1]: sshd@5-172.24.4.28:22-172.24.4.1:40550.service: Deactivated successfully. Dec 13 02:39:46.924689 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:39:46.927909 systemd-logind[1434]: Session 8 logged out. Waiting for processes to exit. Dec 13 02:39:46.939437 systemd[1]: Started sshd@6-172.24.4.28:22-172.24.4.1:40552.service - OpenSSH per-connection server daemon (172.24.4.1:40552). Dec 13 02:39:46.942811 systemd-logind[1434]: Removed session 8. Dec 13 02:39:48.337003 sshd[1651]: Accepted publickey for core from 172.24.4.1 port 40552 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:39:48.339834 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:39:48.352042 systemd-logind[1434]: New session 9 of user core. Dec 13 02:39:48.365463 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 02:39:48.795827 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 02:39:48.796578 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:39:48.818654 sudo[1654]: pam_unix(sudo:session): session closed for user root Dec 13 02:39:48.995095 sshd[1651]: pam_unix(sshd:session): session closed for user core Dec 13 02:39:49.006090 systemd[1]: sshd@6-172.24.4.28:22-172.24.4.1:40552.service: Deactivated successfully. Dec 13 02:39:49.009096 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:39:49.011953 systemd-logind[1434]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:39:49.019218 systemd[1]: Started sshd@7-172.24.4.28:22-172.24.4.1:40554.service - OpenSSH per-connection server daemon (172.24.4.1:40554). Dec 13 02:39:49.022688 systemd-logind[1434]: Removed session 9. Dec 13 02:39:50.474888 sshd[1659]: Accepted publickey for core from 172.24.4.1 port 40554 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:39:50.477873 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:39:50.484409 systemd-logind[1434]: New session 10 of user core. Dec 13 02:39:50.493878 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 02:39:50.934422 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 02:39:50.935054 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:39:50.936841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:39:50.942080 sudo[1663]: pam_unix(sudo:session): session closed for user root Dec 13 02:39:50.946121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
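Aside (illustrative, assuming an SELinux-enabled kernel with selinuxfs mounted at the usual path): the sudo entry further down runs `setenforce 1`; the resulting mode can be read back from the kernel interface below.

```python
# Read the current SELinux enforcement mode after the logged `setenforce 1`.
from pathlib import Path

enforce = Path("/sys/fs/selinux/enforce")
if enforce.exists():
    print("enforcing" if enforce.read_text().strip() == "1" else "permissive")
else:
    print("SELinux not enabled on this kernel")
```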
Dec 13 02:39:50.949840 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 02:39:50.950533 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:39:50.973370 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 02:39:50.974346 auditctl[1669]: No rules Dec 13 02:39:50.974851 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 02:39:50.975096 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 02:39:50.986068 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 02:39:51.062656 augenrules[1687]: No rules Dec 13 02:39:51.064523 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 02:39:51.068080 sudo[1662]: pam_unix(sudo:session): session closed for user root Dec 13 02:39:51.362170 systemd[1]: Started sshd@8-172.24.4.28:22-172.24.4.1:40562.service - OpenSSH per-connection server daemon (172.24.4.1:40562). Dec 13 02:39:51.602781 sshd[1659]: pam_unix(sshd:session): session closed for user core Dec 13 02:39:51.612912 systemd[1]: sshd@7-172.24.4.28:22-172.24.4.1:40554.service: Deactivated successfully. Dec 13 02:39:51.613536 systemd-logind[1434]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:39:51.620156 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:39:51.625430 systemd-logind[1434]: Removed session 10. Dec 13 02:39:51.683959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:39:51.700147 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:39:51.889331 kubelet[1702]: E1213 02:39:51.889118 1702 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:39:51.894348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:39:51.894831 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:39:52.858010 sshd[1693]: Accepted publickey for core from 172.24.4.1 port 40562 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino Dec 13 02:39:52.861178 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:39:52.872180 systemd-logind[1434]: New session 11 of user core. Dec 13 02:39:52.885922 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 02:39:53.318387 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:39:53.319787 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:39:53.907068 (dockerd)[1728]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 02:39:53.907275 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 02:39:54.867457 dockerd[1728]: time="2024-12-13T02:39:54.867312009Z" level=info msg="Starting up" Dec 13 02:39:55.097854 dockerd[1728]: time="2024-12-13T02:39:55.097783041Z" level=info msg="Loading containers: start." 
Dec 13 02:39:55.274855 kernel: Initializing XFRM netlink socket Dec 13 02:39:55.541229 systemd-networkd[1351]: docker0: Link UP Dec 13 02:39:55.820693 dockerd[1728]: time="2024-12-13T02:39:55.820191464Z" level=info msg="Loading containers: done." Dec 13 02:39:55.859668 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck307414487-merged.mount: Deactivated successfully. Dec 13 02:39:55.860482 dockerd[1728]: time="2024-12-13T02:39:55.860070887Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:39:55.860482 dockerd[1728]: time="2024-12-13T02:39:55.860263769Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 02:39:55.862133 dockerd[1728]: time="2024-12-13T02:39:55.860488167Z" level=info msg="Daemon has completed initialization" Dec 13 02:39:55.921570 dockerd[1728]: time="2024-12-13T02:39:55.921485304Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:39:55.922228 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 02:39:58.919057 containerd[1452]: time="2024-12-13T02:39:58.918891126Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 02:40:00.243650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659753744.mount: Deactivated successfully. Dec 13 02:40:02.053290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 02:40:02.060951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:40:02.204154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:40:02.204544 (kubelet)[1935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:40:02.682354 kubelet[1935]: E1213 02:40:02.682306 1935 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:40:02.685468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:40:02.685692 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
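Annotation: once dockerd logs "API listen on /run/docker.sock" above, the daemon answers plain HTTP over that unix socket. A minimal stdlib sketch querying the Engine API's /version endpoint; the socket path is from the log, the `UnixHTTPConnection` helper is ours, and root (or docker group) access to the socket is assumed.

```python
# Query the Docker Engine API over the unix socket the daemon just opened.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix socket instead of TCP."""

    def __init__(self, path: str):
        super().__init__("localhost")  # host is ignored; kept for the Host header
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
print(conn.getresponse().read().decode())
```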
Dec 13 02:40:02.689859 containerd[1452]: time="2024-12-13T02:40:02.688901183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:02.692063 containerd[1452]: time="2024-12-13T02:40:02.691179154Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262" Dec 13 02:40:02.697189 containerd[1452]: time="2024-12-13T02:40:02.697129360Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:02.701860 containerd[1452]: time="2024-12-13T02:40:02.701818129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:02.703372 containerd[1452]: time="2024-12-13T02:40:02.703326514Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.784327529s" Dec 13 02:40:02.703425 containerd[1452]: time="2024-12-13T02:40:02.703376928Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 02:40:02.728808 containerd[1452]: time="2024-12-13T02:40:02.728763891Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 02:40:05.957645 containerd[1452]: time="2024-12-13T02:40:05.956811238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:05.960048 containerd[1452]: time="2024-12-13T02:40:05.959782934Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740" Dec 13 02:40:05.961281 containerd[1452]: time="2024-12-13T02:40:05.961200606Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:05.964644 containerd[1452]: time="2024-12-13T02:40:05.964575321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:05.966359 containerd[1452]: time="2024-12-13T02:40:05.966127331Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 3.237314631s" Dec 13 02:40:05.966359 containerd[1452]: time="2024-12-13T02:40:05.966185540Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 
02:40:05.991788 containerd[1452]: time="2024-12-13T02:40:05.991739037Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 02:40:08.542239 containerd[1452]: time="2024-12-13T02:40:08.541745892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:08.560013 containerd[1452]: time="2024-12-13T02:40:08.559847555Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830" Dec 13 02:40:08.580144 containerd[1452]: time="2024-12-13T02:40:08.580007665Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:08.590654 containerd[1452]: time="2024-12-13T02:40:08.590401138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:08.593677 update_engine[1441]: I20241213 02:40:08.592748 1441 update_attempter.cc:509] Updating boot flags... Dec 13 02:40:08.597081 containerd[1452]: time="2024-12-13T02:40:08.596418224Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 2.604560006s" Dec 13 02:40:08.597081 containerd[1452]: time="2024-12-13T02:40:08.596568875Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 02:40:08.648110 containerd[1452]: time="2024-12-13T02:40:08.648031621Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 02:40:08.770989 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1973) Dec 13 02:40:11.915971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719765196.mount: Deactivated successfully. 
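Annotation: the pull entries above print both a byte count ("bytes read") and a wall-clock duration ("in ...s"), which gives an effective pull throughput. A back-of-the-envelope sketch using the kube-apiserver numbers from the log; purely illustrative arithmetic.

```python
# Effective pull speed for registry.k8s.io/kube-apiserver:v1.29.12,
# using the figures printed by containerd above.
bytes_read = 35_139_262
duration_s = 3.784327529

print(f"~{bytes_read / duration_s / 2**20:.1f} MiB/s effective pull speed")  # ~8.9 MiB/s
```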
Dec 13 02:40:12.376568 containerd[1452]: time="2024-12-13T02:40:12.376480375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:12.377634 containerd[1452]: time="2024-12-13T02:40:12.377445002Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Dec 13 02:40:12.378856 containerd[1452]: time="2024-12-13T02:40:12.378812538Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:12.381271 containerd[1452]: time="2024-12-13T02:40:12.381246410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:12.382042 containerd[1452]: time="2024-12-13T02:40:12.381828223Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 3.733729087s" Dec 13 02:40:12.382042 containerd[1452]: time="2024-12-13T02:40:12.381870392Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 02:40:12.405027 containerd[1452]: time="2024-12-13T02:40:12.404992076Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:40:12.803922 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 02:40:12.817186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:40:13.074823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:40:13.078709 (kubelet)[2000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:40:13.134763 kubelet[2000]: E1213 02:40:13.134673 2000 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:40:13.138112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:40:13.138412 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:40:13.497546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1373785069.mount: Deactivated successfully. 
Dec 13 02:40:15.217214 containerd[1452]: time="2024-12-13T02:40:15.216995724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:16.023061 containerd[1452]: time="2024-12-13T02:40:16.022895792Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Dec 13 02:40:16.170408 containerd[1452]: time="2024-12-13T02:40:16.170278182Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:16.381071 containerd[1452]: time="2024-12-13T02:40:16.380918744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:16.383999 containerd[1452]: time="2024-12-13T02:40:16.382991738Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.977946564s" Dec 13 02:40:16.383999 containerd[1452]: time="2024-12-13T02:40:16.383077739Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 02:40:16.434691 containerd[1452]: time="2024-12-13T02:40:16.434575047Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 02:40:20.305735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1615710539.mount: Deactivated successfully. 
Dec 13 02:40:20.312676 containerd[1452]: time="2024-12-13T02:40:20.312200514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:20.314563 containerd[1452]: time="2024-12-13T02:40:20.314491779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Dec 13 02:40:20.315994 containerd[1452]: time="2024-12-13T02:40:20.315867667Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:20.319736 containerd[1452]: time="2024-12-13T02:40:20.319570837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:20.321183 containerd[1452]: time="2024-12-13T02:40:20.321136508Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 3.886457478s" Dec 13 02:40:20.321183 containerd[1452]: time="2024-12-13T02:40:20.321182475Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 02:40:20.371434 containerd[1452]: time="2024-12-13T02:40:20.371276945Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 02:40:21.906525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2079435582.mount: Deactivated successfully. Dec 13 02:40:23.303258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 02:40:23.312839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:40:23.868698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:40:23.884542 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:40:24.204248 kubelet[2112]: E1213 02:40:24.204059 2112 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:40:24.207335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:40:24.207499 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 02:40:27.594806 containerd[1452]: time="2024-12-13T02:40:27.594709349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:27.596166 containerd[1452]: time="2024-12-13T02:40:27.596104114Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Dec 13 02:40:27.597150 containerd[1452]: time="2024-12-13T02:40:27.597092550Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:27.609102 containerd[1452]: time="2024-12-13T02:40:27.609031415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:40:27.610937 containerd[1452]: time="2024-12-13T02:40:27.610249201Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 7.238911633s" Dec 13 02:40:27.610937 containerd[1452]: time="2024-12-13T02:40:27.610320153Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 02:40:32.400656 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:40:32.408121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:40:32.430046 systemd[1]: Reloading requested from client PID 2191 ('systemctl') (unit session-11.scope)... Dec 13 02:40:32.430065 systemd[1]: Reloading... Dec 13 02:40:32.536643 zram_generator::config[2225]: No configuration found. Dec 13 02:40:32.674991 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:40:32.758785 systemd[1]: Reloading finished in 328 ms. Dec 13 02:40:32.808332 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 02:40:32.808719 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 02:40:32.809056 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:40:32.813019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:40:32.959925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:40:32.959963 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 02:40:33.394116 kubelet[2293]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:40:33.394116 kubelet[2293]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 02:40:33.394116 kubelet[2293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:40:33.394116 kubelet[2293]: I1213 02:40:33.394081 2293 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:40:34.391738 kubelet[2293]: I1213 02:40:34.391573 2293 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:40:34.392721 kubelet[2293]: I1213 02:40:34.392075 2293 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:40:34.392917 kubelet[2293]: I1213 02:40:34.392745 2293 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:40:35.104962 kubelet[2293]: E1213 02:40:35.104894 2293 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:35.139030 kubelet[2293]: I1213 02:40:35.137944 2293 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:40:35.401130 kubelet[2293]: I1213 02:40:35.400951 2293 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:40:35.409361 kubelet[2293]: I1213 02:40:35.408417 2293 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:40:35.411469 kubelet[2293]: I1213 02:40:35.411283 2293 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:40:35.412692 kubelet[2293]: I1213 02:40:35.412438 2293 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:40:35.412692 kubelet[2293]: I1213 02:40:35.412495 2293 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 
02:40:35.412884 kubelet[2293]: I1213 02:40:35.412799 2293 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:40:35.413836 kubelet[2293]: W1213 02:40:35.413734 2293 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-7-a50b4b34f3.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:35.413836 kubelet[2293]: E1213 02:40:35.413808 2293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-7-a50b4b34f3.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:35.417309 kubelet[2293]: I1213 02:40:35.417248 2293 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:40:35.417402 kubelet[2293]: I1213 02:40:35.417321 2293 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:40:35.417458 kubelet[2293]: I1213 02:40:35.417402 2293 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:40:35.417458 kubelet[2293]: I1213 02:40:35.417435 2293 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:40:35.421113 kubelet[2293]: W1213 02:40:35.420826 2293 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:35.421113 kubelet[2293]: E1213 02:40:35.420963 2293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:35.422019 kubelet[2293]: I1213 02:40:35.421749 2293 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 02:40:35.432826 kubelet[2293]: I1213 02:40:35.432738 2293 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:40:35.432995 kubelet[2293]: W1213 02:40:35.432926 2293 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 02:40:35.434717 kubelet[2293]: I1213 02:40:35.434593 2293 server.go:1256] "Started kubelet" Dec 13 02:40:35.437312 kubelet[2293]: I1213 02:40:35.437235 2293 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:40:35.438989 kubelet[2293]: I1213 02:40:35.438910 2293 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:40:35.442139 kubelet[2293]: I1213 02:40:35.441722 2293 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:40:35.442139 kubelet[2293]: I1213 02:40:35.441954 2293 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:40:35.445287 kubelet[2293]: I1213 02:40:35.443767 2293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:40:35.447511 kubelet[2293]: E1213 02:40:35.447467 2293 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.28:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-7-a50b4b34f3.novalocal.18109c3b64212f9d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-7-a50b4b34f3.novalocal,UID:ci-4081-2-1-7-a50b4b34f3.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-7-a50b4b34f3.novalocal,},FirstTimestamp:2024-12-13 02:40:35.434540957 +0000 UTC m=+2.470757070,LastTimestamp:2024-12-13 02:40:35.434540957 +0000 UTC m=+2.470757070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-7-a50b4b34f3.novalocal,}" Dec 13 02:40:35.447938 kubelet[2293]: I1213 02:40:35.447857 2293 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:40:35.448878 kubelet[2293]: I1213 02:40:35.448828 2293 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:40:35.449036 kubelet[2293]: I1213 02:40:35.448983 2293 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:40:35.453987 kubelet[2293]: E1213 02:40:35.452441 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-7-a50b4b34f3.novalocal?timeout=10s\": dial tcp 172.24.4.28:6443: connect: connection refused" interval="200ms" Dec 13 02:40:35.453987 kubelet[2293]: W1213 02:40:35.452865 2293 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:35.453987 kubelet[2293]: I1213 02:40:35.453022 2293 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:40:35.453987 kubelet[2293]: E1213 02:40:35.453073 2293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:35.453987 kubelet[2293]: I1213 02:40:35.453147 2293 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:40:35.457956 kubelet[2293]: I1213 02:40:35.457473 2293 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:40:35.458648 kubelet[2293]: E1213 02:40:35.457479 2293 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:40:35.486370 kubelet[2293]: I1213 02:40:35.486309 2293 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:40:35.494317 kubelet[2293]: I1213 02:40:35.494264 2293 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:40:35.494317 kubelet[2293]: I1213 02:40:35.494308 2293 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:40:35.494317 kubelet[2293]: I1213 02:40:35.494328 2293 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:40:35.494488 kubelet[2293]: E1213 02:40:35.494382 2293 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:40:35.498901 kubelet[2293]: W1213 02:40:35.498287 2293 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:35.498901 kubelet[2293]: E1213 02:40:35.498480 2293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:35.507777 kubelet[2293]: I1213 02:40:35.507710 2293 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:40:35.507777 kubelet[2293]: I1213 02:40:35.507766 2293 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:40:35.507948 kubelet[2293]: I1213 02:40:35.507794 2293 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:40:35.519723 kubelet[2293]: I1213 02:40:35.519684 2293 policy_none.go:49] "None policy: Start" Dec 13 02:40:35.520399 kubelet[2293]: I1213 02:40:35.520377 2293 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:40:35.520452 kubelet[2293]: I1213 02:40:35.520433 2293 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:40:35.536979 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 02:40:35.551371 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 02:40:35.551978 kubelet[2293]: I1213 02:40:35.551770 2293 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.552651 kubelet[2293]: E1213 02:40:35.552247 2293 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.28:6443/api/v1/nodes\": dial tcp 172.24.4.28:6443: connect: connection refused" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.557114 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
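Every "connect: connection refused" in this stretch is the same bootstrap circularity: the kubelet's client-go reflectors, certificate manager, event recorder, and lease controller all target https://172.24.4.28:6443, but that endpoint is served by the kube-apiserver static pod the kubelet itself has not started yet, so each dial fails at the TCP level and is retried. A stdlib-only sketch of the same reachability check, with the address taken from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The apiserver endpoint the kubelet keeps dialing above.
	const addr = "172.24.4.28:6443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// While nothing listens, this prints e.g.
		// "dial tcp 172.24.4.28:6443: connect: connection refused",
		// matching the reflector and certificate_manager errors above.
		fmt.Println(err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}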
Dec 13 02:40:35.568884 kubelet[2293]: I1213 02:40:35.568840 2293 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:40:35.569276 kubelet[2293]: I1213 02:40:35.569248 2293 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:40:35.573141 kubelet[2293]: E1213 02:40:35.573014 2293 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" not found" Dec 13 02:40:35.594667 kubelet[2293]: I1213 02:40:35.594639 2293 topology_manager.go:215] "Topology Admit Handler" podUID="c2baf804f9f14ef8331ed6c4d4b59bb0" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.597147 kubelet[2293]: I1213 02:40:35.597086 2293 topology_manager.go:215] "Topology Admit Handler" podUID="5ba957b41879fcd8fff2fadeb46b2fb5" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.599742 kubelet[2293]: I1213 02:40:35.599457 2293 topology_manager.go:215] "Topology Admit Handler" podUID="f94d70ea06814fe3bf54d20dab1ddc0b" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.609213 systemd[1]: Created slice kubepods-burstable-podc2baf804f9f14ef8331ed6c4d4b59bb0.slice - libcontainer container kubepods-burstable-podc2baf804f9f14ef8331ed6c4d4b59bb0.slice. Dec 13 02:40:35.627981 systemd[1]: Created slice kubepods-burstable-podf94d70ea06814fe3bf54d20dab1ddc0b.slice - libcontainer container kubepods-burstable-podf94d70ea06814fe3bf54d20dab1ddc0b.slice. Dec 13 02:40:35.642884 systemd[1]: Created slice kubepods-burstable-pod5ba957b41879fcd8fff2fadeb46b2fb5.slice - libcontainer container kubepods-burstable-pod5ba957b41879fcd8fff2fadeb46b2fb5.slice. 
Dec 13 02:40:35.653820 kubelet[2293]: E1213 02:40:35.653725 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-7-a50b4b34f3.novalocal?timeout=10s\": dial tcp 172.24.4.28:6443: connect: connection refused" interval="400ms" Dec 13 02:40:35.750711 kubelet[2293]: I1213 02:40:35.750374 2293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ba957b41879fcd8fff2fadeb46b2fb5-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"5ba957b41879fcd8fff2fadeb46b2fb5\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.750711 kubelet[2293]: I1213 02:40:35.750488 2293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f94d70ea06814fe3bf54d20dab1ddc0b-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"f94d70ea06814fe3bf54d20dab1ddc0b\") " pod="kube-system/kube-scheduler-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.750711 kubelet[2293]: I1213 02:40:35.750566 2293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2baf804f9f14ef8331ed6c4d4b59bb0-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"c2baf804f9f14ef8331ed6c4d4b59bb0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.750711 kubelet[2293]: I1213 02:40:35.750685 2293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2baf804f9f14ef8331ed6c4d4b59bb0-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"c2baf804f9f14ef8331ed6c4d4b59bb0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.751178 kubelet[2293]: I1213 02:40:35.750765 2293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2baf804f9f14ef8331ed6c4d4b59bb0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"c2baf804f9f14ef8331ed6c4d4b59bb0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.751178 kubelet[2293]: I1213 02:40:35.750829 2293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ba957b41879fcd8fff2fadeb46b2fb5-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"5ba957b41879fcd8fff2fadeb46b2fb5\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.751178 kubelet[2293]: I1213 02:40:35.750893 2293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ba957b41879fcd8fff2fadeb46b2fb5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"5ba957b41879fcd8fff2fadeb46b2fb5\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.751178 kubelet[2293]: I1213 02:40:35.750948 2293 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ba957b41879fcd8fff2fadeb46b2fb5-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"5ba957b41879fcd8fff2fadeb46b2fb5\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.751542 kubelet[2293]: I1213 02:40:35.751021 2293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ba957b41879fcd8fff2fadeb46b2fb5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"5ba957b41879fcd8fff2fadeb46b2fb5\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.755743 kubelet[2293]: I1213 02:40:35.755689 2293 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.756431 kubelet[2293]: E1213 02:40:35.756223 2293 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.28:6443/api/v1/nodes\": dial tcp 172.24.4.28:6443: connect: connection refused" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:35.928138 containerd[1452]: time="2024-12-13T02:40:35.926404058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal,Uid:c2baf804f9f14ef8331ed6c4d4b59bb0,Namespace:kube-system,Attempt:0,}" Dec 13 02:40:35.944579 containerd[1452]: time="2024-12-13T02:40:35.944471580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-7-a50b4b34f3.novalocal,Uid:f94d70ea06814fe3bf54d20dab1ddc0b,Namespace:kube-system,Attempt:0,}" Dec 13 02:40:35.948337 containerd[1452]: time="2024-12-13T02:40:35.948080398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal,Uid:5ba957b41879fcd8fff2fadeb46b2fb5,Namespace:kube-system,Attempt:0,}" Dec 13 02:40:36.055689 kubelet[2293]: E1213 02:40:36.055568 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-7-a50b4b34f3.novalocal?timeout=10s\": dial tcp 172.24.4.28:6443: connect: connection refused" interval="800ms" Dec 13 02:40:36.160553 kubelet[2293]: I1213 02:40:36.160431 2293 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:36.161370 kubelet[2293]: E1213 02:40:36.161031 2293 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.28:6443/api/v1/nodes\": dial tcp 172.24.4.28:6443: connect: connection refused" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:36.307700 kubelet[2293]: W1213 02:40:36.307020 2293 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:36.307700 kubelet[2293]: E1213 02:40:36.307154 2293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 
02:40:36.381812 kubelet[2293]: W1213 02:40:36.381575 2293 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:36.381812 kubelet[2293]: E1213 02:40:36.381750 2293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:36.856791 kubelet[2293]: E1213 02:40:36.856541 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-7-a50b4b34f3.novalocal?timeout=10s\": dial tcp 172.24.4.28:6443: connect: connection refused" interval="1.6s" Dec 13 02:40:36.921265 kubelet[2293]: W1213 02:40:36.921087 2293 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:36.921265 kubelet[2293]: E1213 02:40:36.921248 2293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:36.964705 kubelet[2293]: I1213 02:40:36.964225 2293 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:36.964930 kubelet[2293]: E1213 02:40:36.964736 2293 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.28:6443/api/v1/nodes\": dial tcp 172.24.4.28:6443: connect: connection refused" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:37.004391 kubelet[2293]: W1213 02:40:37.004266 2293 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-7-a50b4b34f3.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:37.004391 kubelet[2293]: E1213 02:40:37.004393 2293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-7-a50b4b34f3.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:37.074976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount285671858.mount: Deactivated successfully. 
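The lease controller's retry interval visibly doubles across these records: 200ms (02:40:35.452), 400ms (02:40:35.653), 800ms (02:40:36.055), 1.6s (02:40:36.856), and 3.2s (02:40:38.457, below). A plain-Go sketch of that exponential backoff pattern; the 7s ceiling is an assumption for illustration, since this log only shows intervals up to 3.2s:

package main

import (
	"fmt"
	"time"
)

// ensureLease stands in for the kubelet's "ensure lease exists" call;
// here it always fails, like the refused dials in this log.
func ensureLease() error {
	return fmt.Errorf("dial tcp 172.24.4.28:6443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond // first retry interval seen above
	const ceiling = 7 * time.Second    // assumed cap, not shown in the log
	for attempt := 1; attempt <= 5; attempt++ {
		if err := ensureLease(); err != nil {
			fmt.Printf("attempt %d failed (%v), will retry in %s\n", attempt, err, interval)
			time.Sleep(interval)
			interval *= 2
			if interval > ceiling {
				interval = ceiling
			}
		}
	}
}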
Dec 13 02:40:37.098006 containerd[1452]: time="2024-12-13T02:40:37.097907880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:40:37.101826 containerd[1452]: time="2024-12-13T02:40:37.101773249Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:40:37.107009 containerd[1452]: time="2024-12-13T02:40:37.106770835Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:40:37.109291 containerd[1452]: time="2024-12-13T02:40:37.109222479Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:40:37.111736 containerd[1452]: time="2024-12-13T02:40:37.111646001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:40:37.114678 containerd[1452]: time="2024-12-13T02:40:37.114573676Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:40:37.117332 containerd[1452]: time="2024-12-13T02:40:37.117167116Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 02:40:37.120766 containerd[1452]: time="2024-12-13T02:40:37.120674636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:40:37.127303 containerd[1452]: time="2024-12-13T02:40:37.126948307Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.178679867s" Dec 13 02:40:37.133075 containerd[1452]: time="2024-12-13T02:40:37.132992119Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.206213511s" Dec 13 02:40:37.142878 containerd[1452]: time="2024-12-13T02:40:37.142804949Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.198116173s" Dec 13 02:40:37.211494 kubelet[2293]: E1213 02:40:37.211415 2293 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": 
dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:37.442909 containerd[1452]: time="2024-12-13T02:40:37.442428794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:40:37.443115 containerd[1452]: time="2024-12-13T02:40:37.443014389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:40:37.443414 containerd[1452]: time="2024-12-13T02:40:37.443230484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:37.444541 containerd[1452]: time="2024-12-13T02:40:37.443358533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:37.447500 containerd[1452]: time="2024-12-13T02:40:37.446847638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:40:37.447500 containerd[1452]: time="2024-12-13T02:40:37.446899825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:40:37.447500 containerd[1452]: time="2024-12-13T02:40:37.446920564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:37.447500 containerd[1452]: time="2024-12-13T02:40:37.446995153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:37.450061 containerd[1452]: time="2024-12-13T02:40:37.448119556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:40:37.450061 containerd[1452]: time="2024-12-13T02:40:37.449887372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:40:37.450061 containerd[1452]: time="2024-12-13T02:40:37.449914853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:37.450061 containerd[1452]: time="2024-12-13T02:40:37.449996937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:40:37.484781 systemd[1]: Started cri-containerd-546e65b8f04c9a29feff1276be8c473e718ee7f7773f0cd8e4ca21b31eec64a9.scope - libcontainer container 546e65b8f04c9a29feff1276be8c473e718ee7f7773f0cd8e4ca21b31eec64a9. Dec 13 02:40:37.497811 systemd[1]: Started cri-containerd-1b5484041723a6142208a07a9aee36edab2faab7ff7b2823558a3ead2b0b3de7.scope - libcontainer container 1b5484041723a6142208a07a9aee36edab2faab7ff7b2823558a3ead2b0b3de7. Dec 13 02:40:37.500310 systemd[1]: Started cri-containerd-b5d7ea86e5a7597a59dfa35e370d286e9af23b2ce2d2988c7fdb0da85d530b7a.scope - libcontainer container b5d7ea86e5a7597a59dfa35e370d286e9af23b2ce2d2988c7fdb0da85d530b7a. 
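The three "Pulled image" records above report the same registry.k8s.io/pause:3.8 content (repo-digest size 311286 bytes) with nearly identical wall times, one pull result per sandbox being created concurrently. A small sketch parsing those exact durations with the standard library; the per-pull throughput figure is only illustrative, since concurrent pulls of one image deduplicate against the same content:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Size and durations copied from the containerd "Pulled image" lines above.
	const imageBytes = 311286
	pulls := []string{"1.178679867s", "1.206213511s", "1.198116173s"}

	for _, p := range pulls {
		d, err := time.ParseDuration(p)
		if err != nil {
			panic(err)
		}
		kibps := float64(imageBytes) / 1024 / d.Seconds()
		fmt.Printf("pull took %v (~%.0f KiB/s for registry.k8s.io/pause:3.8)\n", d, kibps)
	}
}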
Dec 13 02:40:37.556271 containerd[1452]: time="2024-12-13T02:40:37.556105783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal,Uid:c2baf804f9f14ef8331ed6c4d4b59bb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b5484041723a6142208a07a9aee36edab2faab7ff7b2823558a3ead2b0b3de7\"" Dec 13 02:40:37.581954 containerd[1452]: time="2024-12-13T02:40:37.581893618Z" level=info msg="CreateContainer within sandbox \"1b5484041723a6142208a07a9aee36edab2faab7ff7b2823558a3ead2b0b3de7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:40:37.583813 containerd[1452]: time="2024-12-13T02:40:37.583743015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal,Uid:5ba957b41879fcd8fff2fadeb46b2fb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"546e65b8f04c9a29feff1276be8c473e718ee7f7773f0cd8e4ca21b31eec64a9\"" Dec 13 02:40:37.588845 containerd[1452]: time="2024-12-13T02:40:37.588806204Z" level=info msg="CreateContainer within sandbox \"546e65b8f04c9a29feff1276be8c473e718ee7f7773f0cd8e4ca21b31eec64a9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:40:37.604854 containerd[1452]: time="2024-12-13T02:40:37.604811484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-7-a50b4b34f3.novalocal,Uid:f94d70ea06814fe3bf54d20dab1ddc0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5d7ea86e5a7597a59dfa35e370d286e9af23b2ce2d2988c7fdb0da85d530b7a\"" Dec 13 02:40:37.608195 containerd[1452]: time="2024-12-13T02:40:37.607887266Z" level=info msg="CreateContainer within sandbox \"b5d7ea86e5a7597a59dfa35e370d286e9af23b2ce2d2988c7fdb0da85d530b7a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:40:37.667254 containerd[1452]: time="2024-12-13T02:40:37.667200343Z" level=info msg="CreateContainer within sandbox \"1b5484041723a6142208a07a9aee36edab2faab7ff7b2823558a3ead2b0b3de7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f5fc3511d9a5336643bc332103c5f096e0d7e0aa6110e539ebeae2fa445ff865\"" Dec 13 02:40:37.668010 containerd[1452]: time="2024-12-13T02:40:37.667984640Z" level=info msg="StartContainer for \"f5fc3511d9a5336643bc332103c5f096e0d7e0aa6110e539ebeae2fa445ff865\"" Dec 13 02:40:37.688474 containerd[1452]: time="2024-12-13T02:40:37.688327321Z" level=info msg="CreateContainer within sandbox \"546e65b8f04c9a29feff1276be8c473e718ee7f7773f0cd8e4ca21b31eec64a9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e24388ac9147025bb9157e09428956a9d27817db284e490f68372f303e730ec1\"" Dec 13 02:40:37.689291 containerd[1452]: time="2024-12-13T02:40:37.689246680Z" level=info msg="StartContainer for \"e24388ac9147025bb9157e09428956a9d27817db284e490f68372f303e730ec1\"" Dec 13 02:40:37.699823 systemd[1]: Started cri-containerd-f5fc3511d9a5336643bc332103c5f096e0d7e0aa6110e539ebeae2fa445ff865.scope - libcontainer container f5fc3511d9a5336643bc332103c5f096e0d7e0aa6110e539ebeae2fa445ff865. 
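The RunPodSandbox → CreateContainer → StartContainer sequence above (and the three cri-containerd-*.scope units started just before it, one runc shim per sandbox) is the CRI call flow between the kubelet and containerd. A compressed sketch of the same three RPCs against containerd's CRI socket, using the published CRI v1 API (k8s.io/cri-api); pod metadata is copied from the kube-apiserver lines above, the image reference is an assumption, and most container config is elided for brevity:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// containerd's CRI endpoint, the same socket the kubelet uses.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	must(err)
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: pod-level sandbox (the pause container) first.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal",
			Namespace: "kube-system",
			Uid:       "c2baf804f9f14ef8331ed6c4d4b59bb0",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	must(err)

	// 2. CreateContainer inside that sandbox; the image tag is assumed.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.29.2"},
		},
		SandboxConfig: sandboxCfg,
	})
	must(err)

	// 3. StartContainer, the point at which "StartContainer ... returns
	// successfully" appears in the log below.
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	must(err)
	fmt.Println("started container", ctr.ContainerId)
}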
Dec 13 02:40:37.721142 containerd[1452]: time="2024-12-13T02:40:37.721009498Z" level=info msg="CreateContainer within sandbox \"b5d7ea86e5a7597a59dfa35e370d286e9af23b2ce2d2988c7fdb0da85d530b7a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f2f4eafffe05dcefe0ee34d5b14bbfadd15c8496f613db09307351fabf3cb10\"" Dec 13 02:40:37.723702 containerd[1452]: time="2024-12-13T02:40:37.723666726Z" level=info msg="StartContainer for \"9f2f4eafffe05dcefe0ee34d5b14bbfadd15c8496f613db09307351fabf3cb10\"" Dec 13 02:40:37.749125 systemd[1]: Started cri-containerd-e24388ac9147025bb9157e09428956a9d27817db284e490f68372f303e730ec1.scope - libcontainer container e24388ac9147025bb9157e09428956a9d27817db284e490f68372f303e730ec1. Dec 13 02:40:37.774149 containerd[1452]: time="2024-12-13T02:40:37.774105379Z" level=info msg="StartContainer for \"f5fc3511d9a5336643bc332103c5f096e0d7e0aa6110e539ebeae2fa445ff865\" returns successfully" Dec 13 02:40:37.776804 systemd[1]: Started cri-containerd-9f2f4eafffe05dcefe0ee34d5b14bbfadd15c8496f613db09307351fabf3cb10.scope - libcontainer container 9f2f4eafffe05dcefe0ee34d5b14bbfadd15c8496f613db09307351fabf3cb10. Dec 13 02:40:37.843891 containerd[1452]: time="2024-12-13T02:40:37.843824786Z" level=info msg="StartContainer for \"9f2f4eafffe05dcefe0ee34d5b14bbfadd15c8496f613db09307351fabf3cb10\" returns successfully" Dec 13 02:40:37.844142 containerd[1452]: time="2024-12-13T02:40:37.843847277Z" level=info msg="StartContainer for \"e24388ac9147025bb9157e09428956a9d27817db284e490f68372f303e730ec1\" returns successfully" Dec 13 02:40:38.401237 kubelet[2293]: W1213 02:40:38.401191 2293 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:38.401237 kubelet[2293]: E1213 02:40:38.401237 2293 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.28:6443: connect: connection refused Dec 13 02:40:38.458198 kubelet[2293]: E1213 02:40:38.457642 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-7-a50b4b34f3.novalocal?timeout=10s\": dial tcp 172.24.4.28:6443: connect: connection refused" interval="3.2s" Dec 13 02:40:38.567249 kubelet[2293]: I1213 02:40:38.567220 2293 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:38.567526 kubelet[2293]: E1213 02:40:38.567505 2293 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.28:6443/api/v1/nodes\": dial tcp 172.24.4.28:6443: connect: connection refused" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:41.375718 kubelet[2293]: E1213 02:40:41.375594 2293 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-2-1-7-a50b4b34f3.novalocal" not found Dec 13 02:40:41.425688 kubelet[2293]: I1213 02:40:41.425552 2293 apiserver.go:52] "Watching apiserver" Dec 13 02:40:41.450050 kubelet[2293]: I1213 02:40:41.449906 2293 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 
02:40:41.665209 kubelet[2293]: E1213 02:40:41.664991 2293 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-7-a50b4b34f3.novalocal\" not found" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:41.753033 kubelet[2293]: E1213 02:40:41.753005 2293 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-2-1-7-a50b4b34f3.novalocal" not found Dec 13 02:40:41.770522 kubelet[2293]: I1213 02:40:41.770471 2293 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:41.841069 kubelet[2293]: I1213 02:40:41.841006 2293 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:43.789601 kubelet[2293]: W1213 02:40:43.789539 2293 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:40:44.458000 systemd[1]: Reloading requested from client PID 2564 ('systemctl') (unit session-11.scope)... Dec 13 02:40:44.458021 systemd[1]: Reloading... Dec 13 02:40:44.552632 zram_generator::config[2603]: No configuration found. Dec 13 02:40:44.753272 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:40:44.878271 systemd[1]: Reloading finished in 419 ms. Dec 13 02:40:44.929556 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:40:44.938915 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:40:44.939188 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:40:44.939238 systemd[1]: kubelet.service: Consumed 1.275s CPU time, 111.2M memory peak, 0B memory swap peak. Dec 13 02:40:44.953821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:40:45.288493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:40:45.303193 (kubelet)[2666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 02:40:45.872778 sudo[2677]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 02:40:45.873156 sudo[2677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 02:40:45.877078 kubelet[2666]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:40:45.877078 kubelet[2666]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:40:45.877078 kubelet[2666]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
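The warnings.go:70 message above ("metadata.name: this is used in the Pod's hostname ... must not contain dots") fires because static pod names embed the node name, and this node's name, ci-4081-2-1-7-a50b4b34f3.novalocal, is a dotted FQDN rather than a single DNS label. A stdlib check of the same rule; the regexp mirrors the standard RFC 1123 label pattern used by Kubernetes validation:

package main

import (
	"fmt"
	"regexp"
)

// dns1123Label: lowercase alphanumerics and '-', must start and end
// alphanumeric, at most 63 characters, and crucially no dots.
var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

func valid(name string) bool {
	return len(name) <= 63 && dns1123Label.MatchString(name)
}

func main() {
	fmt.Println(valid("ci-4081-2-1-7-a50b4b34f3.novalocal")) // false: dots rejected
	fmt.Println(valid("ci-4081-2-1-7-a50b4b34f3"))           // true
}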
Dec 13 02:40:45.877385 kubelet[2666]: I1213 02:40:45.877130 2666 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:40:45.881482 kubelet[2666]: I1213 02:40:45.881443 2666 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:40:45.881482 kubelet[2666]: I1213 02:40:45.881475 2666 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:40:45.881747 kubelet[2666]: I1213 02:40:45.881719 2666 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:40:45.885270 kubelet[2666]: I1213 02:40:45.884856 2666 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:40:45.888662 kubelet[2666]: I1213 02:40:45.888633 2666 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:40:45.904922 kubelet[2666]: I1213 02:40:45.904815 2666 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:40:45.905400 kubelet[2666]: I1213 02:40:45.905386 2666 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:40:45.905734 kubelet[2666]: I1213 02:40:45.905718 2666 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:40:45.906056 kubelet[2666]: I1213 02:40:45.905881 2666 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:40:45.906056 kubelet[2666]: I1213 02:40:45.905901 2666 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:40:45.906056 kubelet[2666]: I1213 02:40:45.905935 2666 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:40:45.906260 kubelet[2666]: I1213 02:40:45.906233 2666 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:40:45.906983 kubelet[2666]: I1213 02:40:45.906735 2666 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:40:45.906983 kubelet[2666]: I1213 02:40:45.906771 2666 kubelet.go:312] "Adding apiserver pod source" Dec 13 
02:40:45.906983 kubelet[2666]: I1213 02:40:45.906785 2666 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:40:45.908063 kubelet[2666]: I1213 02:40:45.908047 2666 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 02:40:45.908543 kubelet[2666]: I1213 02:40:45.908531 2666 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:40:45.909662 kubelet[2666]: I1213 02:40:45.909648 2666 server.go:1256] "Started kubelet" Dec 13 02:40:45.918951 kubelet[2666]: I1213 02:40:45.918919 2666 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:40:45.920929 kubelet[2666]: I1213 02:40:45.919797 2666 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:40:45.921482 kubelet[2666]: I1213 02:40:45.921259 2666 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:40:45.924187 kubelet[2666]: I1213 02:40:45.922875 2666 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:40:45.924561 kubelet[2666]: I1213 02:40:45.924547 2666 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:40:45.932638 kubelet[2666]: I1213 02:40:45.932532 2666 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:40:45.936109 kubelet[2666]: I1213 02:40:45.934963 2666 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:40:45.936109 kubelet[2666]: I1213 02:40:45.935054 2666 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:40:45.936109 kubelet[2666]: I1213 02:40:45.935529 2666 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:40:45.940299 kubelet[2666]: I1213 02:40:45.939774 2666 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:40:45.942004 kubelet[2666]: I1213 02:40:45.941984 2666 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:40:45.942544 kubelet[2666]: I1213 02:40:45.942517 2666 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:40:45.944693 kubelet[2666]: I1213 02:40:45.944665 2666 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:40:45.944693 kubelet[2666]: I1213 02:40:45.944694 2666 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:40:45.944849 kubelet[2666]: I1213 02:40:45.944713 2666 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:40:45.944849 kubelet[2666]: E1213 02:40:45.944760 2666 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:40:46.021161 kubelet[2666]: I1213 02:40:46.021128 2666 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:40:46.021161 kubelet[2666]: I1213 02:40:46.021157 2666 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:40:46.021161 kubelet[2666]: I1213 02:40:46.021180 2666 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:40:46.021386 kubelet[2666]: I1213 02:40:46.021330 2666 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:40:46.021386 kubelet[2666]: I1213 02:40:46.021352 2666 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:40:46.021386 kubelet[2666]: I1213 02:40:46.021360 2666 policy_none.go:49] "None policy: Start" Dec 13 02:40:46.022352 kubelet[2666]: I1213 02:40:46.022054 2666 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:40:46.022352 kubelet[2666]: I1213 02:40:46.022082 2666 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:40:46.022352 kubelet[2666]: I1213 02:40:46.022271 2666 state_mem.go:75] "Updated machine memory state" Dec 13 02:40:46.030480 kubelet[2666]: I1213 02:40:46.030442 2666 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:40:46.030726 kubelet[2666]: I1213 02:40:46.030703 2666 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:40:46.044967 kubelet[2666]: I1213 02:40:46.044858 2666 topology_manager.go:215] "Topology Admit Handler" podUID="c2baf804f9f14ef8331ed6c4d4b59bb0" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.044967 kubelet[2666]: I1213 02:40:46.044942 2666 topology_manager.go:215] "Topology Admit Handler" podUID="5ba957b41879fcd8fff2fadeb46b2fb5" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.045165 kubelet[2666]: I1213 02:40:46.044981 2666 topology_manager.go:215] "Topology Admit Handler" podUID="f94d70ea06814fe3bf54d20dab1ddc0b" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.050979 kubelet[2666]: I1213 02:40:46.050940 2666 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.062299 kubelet[2666]: W1213 02:40:46.061426 2666 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:40:46.062299 kubelet[2666]: W1213 02:40:46.061739 2666 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:40:46.062299 kubelet[2666]: E1213 02:40:46.061798 2666 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-2-1-7-a50b4b34f3.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.070241 kubelet[2666]: W1213 
02:40:46.070115 2666 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:40:46.071817 kubelet[2666]: I1213 02:40:46.070573 2666 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.071817 kubelet[2666]: I1213 02:40:46.070648 2666 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.141295 kubelet[2666]: I1213 02:40:46.140828 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2baf804f9f14ef8331ed6c4d4b59bb0-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"c2baf804f9f14ef8331ed6c4d4b59bb0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.141295 kubelet[2666]: I1213 02:40:46.140913 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ba957b41879fcd8fff2fadeb46b2fb5-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"5ba957b41879fcd8fff2fadeb46b2fb5\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.141295 kubelet[2666]: I1213 02:40:46.140942 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ba957b41879fcd8fff2fadeb46b2fb5-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"5ba957b41879fcd8fff2fadeb46b2fb5\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.141295 kubelet[2666]: I1213 02:40:46.140972 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f94d70ea06814fe3bf54d20dab1ddc0b-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"f94d70ea06814fe3bf54d20dab1ddc0b\") " pod="kube-system/kube-scheduler-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.141295 kubelet[2666]: I1213 02:40:46.140999 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2baf804f9f14ef8331ed6c4d4b59bb0-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"c2baf804f9f14ef8331ed6c4d4b59bb0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.141528 kubelet[2666]: I1213 02:40:46.141025 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2baf804f9f14ef8331ed6c4d4b59bb0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"c2baf804f9f14ef8331ed6c4d4b59bb0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.141528 kubelet[2666]: I1213 02:40:46.141050 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ba957b41879fcd8fff2fadeb46b2fb5-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"5ba957b41879fcd8fff2fadeb46b2fb5\") " 
pod="kube-system/kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.143548 kubelet[2666]: I1213 02:40:46.141726 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ba957b41879fcd8fff2fadeb46b2fb5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"5ba957b41879fcd8fff2fadeb46b2fb5\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.143548 kubelet[2666]: I1213 02:40:46.141818 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ba957b41879fcd8fff2fadeb46b2fb5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal\" (UID: \"5ba957b41879fcd8fff2fadeb46b2fb5\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:46.916750 kubelet[2666]: I1213 02:40:46.916677 2666 apiserver.go:52] "Watching apiserver" Dec 13 02:40:46.936539 kubelet[2666]: I1213 02:40:46.936294 2666 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:40:47.085294 kubelet[2666]: W1213 02:40:47.085019 2666 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:40:47.085294 kubelet[2666]: E1213 02:40:47.085237 2666 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal" Dec 13 02:40:47.199969 kubelet[2666]: I1213 02:40:47.199551 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-7-a50b4b34f3.novalocal" podStartSLOduration=4.199514634 podStartE2EDuration="4.199514634s" podCreationTimestamp="2024-12-13 02:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:40:47.197071974 +0000 UTC m=+1.418366210" watchObservedRunningTime="2024-12-13 02:40:47.199514634 +0000 UTC m=+1.420808870" Dec 13 02:40:47.303154 sudo[2677]: pam_unix(sudo:session): session closed for user root Dec 13 02:40:47.495208 kubelet[2666]: I1213 02:40:47.494959 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-7-a50b4b34f3.novalocal" podStartSLOduration=1.494913937 podStartE2EDuration="1.494913937s" podCreationTimestamp="2024-12-13 02:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:40:47.460986149 +0000 UTC m=+1.682280485" watchObservedRunningTime="2024-12-13 02:40:47.494913937 +0000 UTC m=+1.716208173" Dec 13 02:40:47.537381 kubelet[2666]: I1213 02:40:47.536159 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-7-a50b4b34f3.novalocal" podStartSLOduration=1.536112618 podStartE2EDuration="1.536112618s" podCreationTimestamp="2024-12-13 02:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:40:47.495264502 +0000 UTC m=+1.716558748" 
watchObservedRunningTime="2024-12-13 02:40:47.536112618 +0000 UTC m=+1.757406864" Dec 13 02:40:50.532825 sudo[1711]: pam_unix(sudo:session): session closed for user root Dec 13 02:40:50.832357 sshd[1693]: pam_unix(sshd:session): session closed for user core Dec 13 02:40:50.840265 systemd[1]: sshd@8-172.24.4.28:22-172.24.4.1:40562.service: Deactivated successfully. Dec 13 02:40:50.848965 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:40:50.849898 systemd[1]: session-11.scope: Consumed 8.523s CPU time, 187.8M memory peak, 0B memory swap peak. Dec 13 02:40:50.853965 systemd-logind[1434]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:40:50.857551 systemd-logind[1434]: Removed session 11. Dec 13 02:40:56.345859 kubelet[2666]: I1213 02:40:56.345723 2666 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:40:56.346357 containerd[1452]: time="2024-12-13T02:40:56.346037882Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:40:56.346890 kubelet[2666]: I1213 02:40:56.346775 2666 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:40:57.544253 kubelet[2666]: I1213 02:40:57.544160 2666 topology_manager.go:215] "Topology Admit Handler" podUID="eb42852a-2445-4a9d-85e4-4ba927796618" podNamespace="kube-system" podName="kube-proxy-62xt6" Dec 13 02:40:57.569557 systemd[1]: Created slice kubepods-besteffort-podeb42852a_2445_4a9d_85e4_4ba927796618.slice - libcontainer container kubepods-besteffort-podeb42852a_2445_4a9d_85e4_4ba927796618.slice. Dec 13 02:40:57.578785 kubelet[2666]: I1213 02:40:57.578572 2666 topology_manager.go:215] "Topology Admit Handler" podUID="7e33bc2f-21fa-4203-be74-def4b3f1347e" podNamespace="kube-system" podName="cilium-qcnbg" Dec 13 02:40:57.593058 systemd[1]: Created slice kubepods-burstable-pod7e33bc2f_21fa_4203_be74_def4b3f1347e.slice - libcontainer container kubepods-burstable-pod7e33bc2f_21fa_4203_be74_def4b3f1347e.slice. 
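With the systemd cgroup driver (CgroupDriver:"systemd" in the nodeConfig dumps above), each pod gets a slice named from its QoS class and UID, with dashes in the UID escaped to underscores so they do not collide with systemd's use of '-' as a hierarchy separator; that is why UID eb42852a-2445-4a9d-85e4-4ba927796618 appears above as kubepods-besteffort-podeb42852a_2445_4a9d_85e4_4ba927796618.slice. A sketch of the mapping, derived from the slice names visible in this log rather than from kubelet source:

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the pattern observed in the systemd lines above:
// kubepods-<qos>-pod<uid-with-dashes-escaped>.slice
func sliceName(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "guaranteed" {
		// Guaranteed pods sit directly under kubepods.slice; this branch
		// is an assumption, as no guaranteed pod appears in this log.
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	fmt.Println(sliceName("besteffort", "eb42852a-2445-4a9d-85e4-4ba927796618"))
	// kubepods-besteffort-podeb42852a_2445_4a9d_85e4_4ba927796618.slice
	fmt.Println(sliceName("burstable", "7e33bc2f-21fa-4203-be74-def4b3f1347e"))
	// kubepods-burstable-pod7e33bc2f_21fa_4203_be74_def4b3f1347e.slice
}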
Dec 13 02:40:57.667047 kubelet[2666]: I1213 02:40:57.666584 2666 topology_manager.go:215] "Topology Admit Handler" podUID="7a15c568-ad16-4365-a519-eec385ad72b1" podNamespace="kube-system" podName="cilium-operator-5cc964979-n8vx9" Dec 13 02:40:57.667581 kubelet[2666]: I1213 02:40:57.666770 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml4l4\" (UniqueName: \"kubernetes.io/projected/eb42852a-2445-4a9d-85e4-4ba927796618-kube-api-access-ml4l4\") pod \"kube-proxy-62xt6\" (UID: \"eb42852a-2445-4a9d-85e4-4ba927796618\") " pod="kube-system/kube-proxy-62xt6" Dec 13 02:40:57.667762 kubelet[2666]: I1213 02:40:57.667748 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb42852a-2445-4a9d-85e4-4ba927796618-lib-modules\") pod \"kube-proxy-62xt6\" (UID: \"eb42852a-2445-4a9d-85e4-4ba927796618\") " pod="kube-system/kube-proxy-62xt6" Dec 13 02:40:57.667994 kubelet[2666]: I1213 02:40:57.667842 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb42852a-2445-4a9d-85e4-4ba927796618-kube-proxy\") pod \"kube-proxy-62xt6\" (UID: \"eb42852a-2445-4a9d-85e4-4ba927796618\") " pod="kube-system/kube-proxy-62xt6" Dec 13 02:40:57.668100 kubelet[2666]: I1213 02:40:57.668087 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb42852a-2445-4a9d-85e4-4ba927796618-xtables-lock\") pod \"kube-proxy-62xt6\" (UID: \"eb42852a-2445-4a9d-85e4-4ba927796618\") " pod="kube-system/kube-proxy-62xt6" Dec 13 02:40:57.682876 systemd[1]: Created slice kubepods-besteffort-pod7a15c568_ad16_4365_a519_eec385ad72b1.slice - libcontainer container kubepods-besteffort-pod7a15c568_ad16_4365_a519_eec385ad72b1.slice. 
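The projected service-account volumes attached here (kube-api-access-ml4l4, and kube-api-access-8fsm7 and kube-api-access-sfqpp below) share a fixed prefix plus a five-character random suffix. A sketch of generating a suffix in the same style; the alphabet is the one apimachinery's rand.String is generally described as using (vowels and the look-alike digits 0, 1, 3 dropped), which is an assumption rather than something this log confirms, though every suffix seen here fits it:

package main

import (
	"fmt"
	"math/rand"
)

// Assumed suffix alphabet (see note above): consonants plus 2,4,5,6,7,8,9.
const alphabet = "bcdfghjklmnpqrstvwxz2456789"

func suffix(n int) string {
	b := make([]byte, n)
	for i := range b {
		b[i] = alphabet[rand.Intn(len(alphabet))]
	}
	return string(b)
}

func main() {
	// Produces names shaped like the kube-api-access-ml4l4 volume above.
	fmt.Println("kube-api-access-" + suffix(5))
}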
Dec 13 02:40:57.769425 kubelet[2666]: I1213 02:40:57.768756 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-cilium-cgroup\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769425 kubelet[2666]: I1213 02:40:57.768824 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-hostproc\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769425 kubelet[2666]: I1213 02:40:57.768871 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-xtables-lock\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769425 kubelet[2666]: I1213 02:40:57.768897 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e33bc2f-21fa-4203-be74-def4b3f1347e-clustermesh-secrets\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769425 kubelet[2666]: I1213 02:40:57.768940 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-etc-cni-netd\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769425 kubelet[2666]: I1213 02:40:57.768966 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fsm7\" (UniqueName: \"kubernetes.io/projected/7e33bc2f-21fa-4203-be74-def4b3f1347e-kube-api-access-8fsm7\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769702 kubelet[2666]: I1213 02:40:57.769008 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-bpf-maps\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769702 kubelet[2666]: I1213 02:40:57.769031 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-lib-modules\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769702 kubelet[2666]: I1213 02:40:57.769054 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-cilium-run\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769702 kubelet[2666]: I1213 02:40:57.769075 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-cni-path\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769702 kubelet[2666]: I1213 02:40:57.769105 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a15c568-ad16-4365-a519-eec385ad72b1-cilium-config-path\") pod \"cilium-operator-5cc964979-n8vx9\" (UID: \"7a15c568-ad16-4365-a519-eec385ad72b1\") " pod="kube-system/cilium-operator-5cc964979-n8vx9"
Dec 13 02:40:57.769823 kubelet[2666]: I1213 02:40:57.769129 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e33bc2f-21fa-4203-be74-def4b3f1347e-cilium-config-path\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769823 kubelet[2666]: I1213 02:40:57.769156 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-host-proc-sys-kernel\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769823 kubelet[2666]: I1213 02:40:57.769181 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e33bc2f-21fa-4203-be74-def4b3f1347e-hubble-tls\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769823 kubelet[2666]: I1213 02:40:57.769204 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-host-proc-sys-net\") pod \"cilium-qcnbg\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") " pod="kube-system/cilium-qcnbg"
Dec 13 02:40:57.769823 kubelet[2666]: I1213 02:40:57.769246 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfqpp\" (UniqueName: \"kubernetes.io/projected/7a15c568-ad16-4365-a519-eec385ad72b1-kube-api-access-sfqpp\") pod \"cilium-operator-5cc964979-n8vx9\" (UID: \"7a15c568-ad16-4365-a519-eec385ad72b1\") " pod="kube-system/cilium-operator-5cc964979-n8vx9"
Dec 13 02:40:57.907652 containerd[1452]: time="2024-12-13T02:40:57.901897512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62xt6,Uid:eb42852a-2445-4a9d-85e4-4ba927796618,Namespace:kube-system,Attempt:0,}"
Dec 13 02:40:57.974904 containerd[1452]: time="2024-12-13T02:40:57.974285764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:40:57.974904 containerd[1452]: time="2024-12-13T02:40:57.974407302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:40:57.974904 containerd[1452]: time="2024-12-13T02:40:57.974430554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:40:57.974904 containerd[1452]: time="2024-12-13T02:40:57.974551060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:40:57.991989 containerd[1452]: time="2024-12-13T02:40:57.991802305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-n8vx9,Uid:7a15c568-ad16-4365-a519-eec385ad72b1,Namespace:kube-system,Attempt:0,}"
Dec 13 02:40:57.995874 systemd[1]: Started cri-containerd-56d389b03e67020cfbdc0e90b9495dac89cf7613a07809849578c63a8ea810e2.scope - libcontainer container 56d389b03e67020cfbdc0e90b9495dac89cf7613a07809849578c63a8ea810e2.
Dec 13 02:40:58.033072 containerd[1452]: time="2024-12-13T02:40:58.033018551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62xt6,Uid:eb42852a-2445-4a9d-85e4-4ba927796618,Namespace:kube-system,Attempt:0,} returns sandbox id \"56d389b03e67020cfbdc0e90b9495dac89cf7613a07809849578c63a8ea810e2\""
Dec 13 02:40:58.038464 containerd[1452]: time="2024-12-13T02:40:58.038408989Z" level=info msg="CreateContainer within sandbox \"56d389b03e67020cfbdc0e90b9495dac89cf7613a07809849578c63a8ea810e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 02:40:58.053393 containerd[1452]: time="2024-12-13T02:40:58.053038688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:40:58.053393 containerd[1452]: time="2024-12-13T02:40:58.053098430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:40:58.053393 containerd[1452]: time="2024-12-13T02:40:58.053123177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:40:58.053393 containerd[1452]: time="2024-12-13T02:40:58.053229646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:40:58.075840 systemd[1]: Started cri-containerd-01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f.scope - libcontainer container 01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f.
Dec 13 02:40:58.091449 containerd[1452]: time="2024-12-13T02:40:58.091114287Z" level=info msg="CreateContainer within sandbox \"56d389b03e67020cfbdc0e90b9495dac89cf7613a07809849578c63a8ea810e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cd216d39a5e36a25a0b4c285eae29e9e000ece4666b31bd64bfaca67c5d2da75\""
Dec 13 02:40:58.092829 containerd[1452]: time="2024-12-13T02:40:58.092544233Z" level=info msg="StartContainer for \"cd216d39a5e36a25a0b4c285eae29e9e000ece4666b31bd64bfaca67c5d2da75\""
Dec 13 02:40:58.138174 systemd[1]: Started cri-containerd-cd216d39a5e36a25a0b4c285eae29e9e000ece4666b31bd64bfaca67c5d2da75.scope - libcontainer container cd216d39a5e36a25a0b4c285eae29e9e000ece4666b31bd64bfaca67c5d2da75.
Dec 13 02:40:58.147298 containerd[1452]: time="2024-12-13T02:40:58.147265905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-n8vx9,Uid:7a15c568-ad16-4365-a519-eec385ad72b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\""
Dec 13 02:40:58.164539 containerd[1452]: time="2024-12-13T02:40:58.163824294Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 02:40:58.194242 containerd[1452]: time="2024-12-13T02:40:58.194185815Z" level=info msg="StartContainer for \"cd216d39a5e36a25a0b4c285eae29e9e000ece4666b31bd64bfaca67c5d2da75\" returns successfully"
Dec 13 02:40:58.198185 containerd[1452]: time="2024-12-13T02:40:58.197959648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qcnbg,Uid:7e33bc2f-21fa-4203-be74-def4b3f1347e,Namespace:kube-system,Attempt:0,}"
Dec 13 02:40:58.271687 containerd[1452]: time="2024-12-13T02:40:58.271406864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:40:58.271687 containerd[1452]: time="2024-12-13T02:40:58.271471445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:40:58.271687 containerd[1452]: time="2024-12-13T02:40:58.271488176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:40:58.271866 containerd[1452]: time="2024-12-13T02:40:58.271599485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:40:58.298861 systemd[1]: Started cri-containerd-4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291.scope - libcontainer container 4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291.
Dec 13 02:40:58.324910 containerd[1452]: time="2024-12-13T02:40:58.324866523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qcnbg,Uid:7e33bc2f-21fa-4203-be74-def4b3f1347e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\""
Dec 13 02:41:00.736598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657918706.mount: Deactivated successfully.
Dec 13 02:41:02.743482 containerd[1452]: time="2024-12-13T02:41:02.743270359Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:41:02.746759 containerd[1452]: time="2024-12-13T02:41:02.746475065Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906589"
Dec 13 02:41:02.748206 containerd[1452]: time="2024-12-13T02:41:02.748139461Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:41:02.750474 containerd[1452]: time="2024-12-13T02:41:02.750427970Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.58655191s"
Dec 13 02:41:02.750631 containerd[1452]: time="2024-12-13T02:41:02.750558508Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 02:41:02.751526 containerd[1452]: time="2024-12-13T02:41:02.751321525Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 02:41:02.771553 containerd[1452]: time="2024-12-13T02:41:02.771515598Z" level=info msg="CreateContainer within sandbox \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 02:41:02.801155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792496184.mount: Deactivated successfully.
Dec 13 02:41:02.814087 containerd[1452]: time="2024-12-13T02:41:02.813366268Z" level=info msg="CreateContainer within sandbox \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\""
Dec 13 02:41:02.814499 containerd[1452]: time="2024-12-13T02:41:02.814416449Z" level=info msg="StartContainer for \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\""
Dec 13 02:41:02.870781 systemd[1]: Started cri-containerd-df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8.scope - libcontainer container df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8.
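[Editorial note] The operator pull above reports both the bytes read (18906589) and the elapsed time (4.58655191s), so the effective pull rate falls out directly; the same arithmetic applies to the much larger cilium image pull later in this journal. A quick sketch with the numbers copied from the entries above:

    # Values reported by containerd for the operator-generic pull.
    bytes_read = 18_906_589    # "bytes read=18906589"
    elapsed_s  = 4.58655191    # "... in 4.58655191s"

    rate = bytes_read / elapsed_s / (1024 * 1024)
    print(f"effective pull rate: {rate:.2f} MiB/s")   # ~3.93 MiB/s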
Dec 13 02:41:02.910692 containerd[1452]: time="2024-12-13T02:41:02.910600358Z" level=info msg="StartContainer for \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\" returns successfully"
Dec 13 02:41:03.112441 kubelet[2666]: I1213 02:41:03.112059 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-62xt6" podStartSLOduration=6.112008937 podStartE2EDuration="6.112008937s" podCreationTimestamp="2024-12-13 02:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:40:59.073039651 +0000 UTC m=+13.294333957" watchObservedRunningTime="2024-12-13 02:41:03.112008937 +0000 UTC m=+17.333303183"
Dec 13 02:41:05.976416 kubelet[2666]: I1213 02:41:05.976313 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-n8vx9" podStartSLOduration=4.3755271780000005 podStartE2EDuration="8.976236177s" podCreationTimestamp="2024-12-13 02:40:57 +0000 UTC" firstStartedPulling="2024-12-13 02:40:58.150286587 +0000 UTC m=+12.371580824" lastFinishedPulling="2024-12-13 02:41:02.750995577 +0000 UTC m=+16.972289823" observedRunningTime="2024-12-13 02:41:03.114275733 +0000 UTC m=+17.335569969" watchObservedRunningTime="2024-12-13 02:41:05.976236177 +0000 UTC m=+20.197530464"
Dec 13 02:41:11.707562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3923801922.mount: Deactivated successfully.
Dec 13 02:41:16.648558 containerd[1452]: time="2024-12-13T02:41:16.648356413Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:41:16.651670 containerd[1452]: time="2024-12-13T02:41:16.651559372Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735271"
Dec 13 02:41:16.652870 containerd[1452]: time="2024-12-13T02:41:16.652816476Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 02:41:16.656121 containerd[1452]: time="2024-12-13T02:41:16.656037038Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.904675337s"
Dec 13 02:41:16.656121 containerd[1452]: time="2024-12-13T02:41:16.656104876Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 02:41:16.661474 containerd[1452]: time="2024-12-13T02:41:16.661399595Z" level=info msg="CreateContainer within sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:41:16.750378 containerd[1452]: time="2024-12-13T02:41:16.750287578Z" level=info msg="CreateContainer within sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60\""
Dec 13 02:41:16.764862 containerd[1452]: time="2024-12-13T02:41:16.763302999Z" level=info msg="StartContainer for \"a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60\""
Dec 13 02:41:17.062432 systemd[1]: run-containerd-runc-k8s.io-a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60-runc.yUsX8C.mount: Deactivated successfully.
Dec 13 02:41:17.075747 systemd[1]: Started cri-containerd-a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60.scope - libcontainer container a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60.
Dec 13 02:41:17.120062 containerd[1452]: time="2024-12-13T02:41:17.119952677Z" level=info msg="StartContainer for \"a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60\" returns successfully"
Dec 13 02:41:17.132073 systemd[1]: cri-containerd-a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60.scope: Deactivated successfully.
Dec 13 02:41:17.739060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60-rootfs.mount: Deactivated successfully.
Dec 13 02:41:17.872252 containerd[1452]: time="2024-12-13T02:41:17.830649299Z" level=info msg="shim disconnected" id=a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60 namespace=k8s.io
Dec 13 02:41:17.873689 containerd[1452]: time="2024-12-13T02:41:17.872690528Z" level=warning msg="cleaning up after shim disconnected" id=a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60 namespace=k8s.io
Dec 13 02:41:17.873689 containerd[1452]: time="2024-12-13T02:41:17.872734011Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:41:18.180302 containerd[1452]: time="2024-12-13T02:41:18.180163309Z" level=info msg="CreateContainer within sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:41:18.239738 containerd[1452]: time="2024-12-13T02:41:18.239640606Z" level=info msg="CreateContainer within sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2\""
Dec 13 02:41:18.241130 containerd[1452]: time="2024-12-13T02:41:18.240947994Z" level=info msg="StartContainer for \"04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2\""
Dec 13 02:41:18.280829 systemd[1]: Started cri-containerd-04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2.scope - libcontainer container 04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2.
Dec 13 02:41:18.321713 containerd[1452]: time="2024-12-13T02:41:18.321657653Z" level=info msg="StartContainer for \"04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2\" returns successfully"
Dec 13 02:41:18.333515 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:41:18.333976 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:41:18.334093 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 13 02:41:18.339017 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
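[Editorial note] The pod_startup_latency_tracker entries earlier in this stretch show how the SLO figure is derived: it is the end-to-end startup time minus the image-pull window, which is why kube-proxy (no pull; sentinel 0001-01-01 timestamps) reports identical SLO and E2E values. Checking the operator's numbers with plain arithmetic, using the monotonic m=+ offsets copied from its entry:

    # Figures copied from the cilium-operator tracker entry above.
    e2e  = 8.976236177                  # podStartE2EDuration, seconds
    pull = 16.972289823 - 12.371580824  # lastFinishedPulling minus firstStartedPulling
                                        # (monotonic m=+ offsets)
    print(round(e2e - pull, 9))         # 4.375527178 == podStartSLOduration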
Dec 13 02:41:18.340383 systemd[1]: cri-containerd-04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2.scope: Deactivated successfully.
Dec 13 02:41:18.378733 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 02:41:18.382220 containerd[1452]: time="2024-12-13T02:41:18.382139737Z" level=info msg="shim disconnected" id=04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2 namespace=k8s.io
Dec 13 02:41:18.382667 containerd[1452]: time="2024-12-13T02:41:18.382365985Z" level=warning msg="cleaning up after shim disconnected" id=04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2 namespace=k8s.io
Dec 13 02:41:18.382667 containerd[1452]: time="2024-12-13T02:41:18.382391422Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:41:18.738704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2-rootfs.mount: Deactivated successfully.
Dec 13 02:41:19.172260 containerd[1452]: time="2024-12-13T02:41:19.171193923Z" level=info msg="CreateContainer within sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:41:20.088581 containerd[1452]: time="2024-12-13T02:41:20.088488016Z" level=info msg="CreateContainer within sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11\""
Dec 13 02:41:20.089779 containerd[1452]: time="2024-12-13T02:41:20.089545812Z" level=info msg="StartContainer for \"ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11\""
Dec 13 02:41:20.167868 systemd[1]: Started cri-containerd-ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11.scope - libcontainer container ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11.
Dec 13 02:41:20.201918 systemd[1]: cri-containerd-ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11.scope: Deactivated successfully.
Dec 13 02:41:20.206786 containerd[1452]: time="2024-12-13T02:41:20.206742225Z" level=info msg="StartContainer for \"ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11\" returns successfully"
Dec 13 02:41:20.230338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11-rootfs.mount: Deactivated successfully.
Dec 13 02:41:20.249105 containerd[1452]: time="2024-12-13T02:41:20.249043946Z" level=info msg="shim disconnected" id=ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11 namespace=k8s.io
Dec 13 02:41:20.249520 containerd[1452]: time="2024-12-13T02:41:20.249309627Z" level=warning msg="cleaning up after shim disconnected" id=ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11 namespace=k8s.io
Dec 13 02:41:20.249520 containerd[1452]: time="2024-12-13T02:41:20.249327230Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:41:21.201657 containerd[1452]: time="2024-12-13T02:41:21.201534213Z" level=info msg="CreateContainer within sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:41:21.239379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2315126840.mount: Deactivated successfully.
Dec 13 02:41:21.252636 containerd[1452]: time="2024-12-13T02:41:21.252538395Z" level=info msg="CreateContainer within sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0\""
Dec 13 02:41:21.253636 containerd[1452]: time="2024-12-13T02:41:21.253567837Z" level=info msg="StartContainer for \"f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0\""
Dec 13 02:41:21.297768 systemd[1]: Started cri-containerd-f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0.scope - libcontainer container f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0.
Dec 13 02:41:21.324564 systemd[1]: cri-containerd-f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0.scope: Deactivated successfully.
Dec 13 02:41:21.334574 containerd[1452]: time="2024-12-13T02:41:21.334455377Z" level=info msg="StartContainer for \"f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0\" returns successfully"
Dec 13 02:41:21.357739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0-rootfs.mount: Deactivated successfully.
Dec 13 02:41:21.372515 containerd[1452]: time="2024-12-13T02:41:21.372422644Z" level=info msg="shim disconnected" id=f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0 namespace=k8s.io
Dec 13 02:41:21.372515 containerd[1452]: time="2024-12-13T02:41:21.372508516Z" level=warning msg="cleaning up after shim disconnected" id=f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0 namespace=k8s.io
Dec 13 02:41:21.372717 containerd[1452]: time="2024-12-13T02:41:21.372520288Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:41:22.209282 containerd[1452]: time="2024-12-13T02:41:22.209161670Z" level=info msg="CreateContainer within sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:41:22.659189 containerd[1452]: time="2024-12-13T02:41:22.659100112Z" level=info msg="CreateContainer within sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\""
Dec 13 02:41:22.661043 containerd[1452]: time="2024-12-13T02:41:22.660879579Z" level=info msg="StartContainer for \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\""
Dec 13 02:41:22.723213 systemd[1]: Started cri-containerd-6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa.scope - libcontainer container 6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa.
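[Editorial note] The four Cilium init steps above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) each follow the same four-beat pattern: CreateContainer names the step, StartContainer runs it, the scope is deactivated, and the shim disconnects; the long-running cilium-agent started here breaks the pattern by staying up. Recovering the creation order from the journal is a small sketch (the regex is an assumption about the msg format shown above):

    import re

    STEP_RE = re.compile(r'CreateContainer within sandbox .* for container '
                         r'&ContainerMetadata\{Name:(?P<name>[^,]+),')

    def creation_order(journal_lines):
        """Ordered container names requested via CreateContainer."""
        return [m.group("name") for line in journal_lines
                if (m := STEP_RE.search(line))]

Against this journal that gives ['kube-proxy', 'cilium-operator', 'mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs', 'clean-cilium-state', 'cilium-agent', 'coredns', 'coredns'].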
Dec 13 02:41:22.826477 containerd[1452]: time="2024-12-13T02:41:22.826393110Z" level=info msg="StartContainer for \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\" returns successfully"
Dec 13 02:41:23.383179 kubelet[2666]: I1213 02:41:23.382798 2666 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 02:41:24.375518 kubelet[2666]: I1213 02:41:24.373560 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qcnbg" podStartSLOduration=9.043310154 podStartE2EDuration="27.373477493s" podCreationTimestamp="2024-12-13 02:40:57 +0000 UTC" firstStartedPulling="2024-12-13 02:40:58.326551276 +0000 UTC m=+12.547845643" lastFinishedPulling="2024-12-13 02:41:16.656718746 +0000 UTC m=+30.878012982" observedRunningTime="2024-12-13 02:41:23.596836488 +0000 UTC m=+37.818130804" watchObservedRunningTime="2024-12-13 02:41:24.373477493 +0000 UTC m=+38.594771779"
Dec 13 02:41:24.375518 kubelet[2666]: I1213 02:41:24.374092 2666 topology_manager.go:215] "Topology Admit Handler" podUID="be165cf6-760e-48e3-8ec6-585c867dfc63" podNamespace="kube-system" podName="coredns-76f75df574-8rrpb"
Dec 13 02:41:24.383844 kubelet[2666]: I1213 02:41:24.382914 2666 topology_manager.go:215] "Topology Admit Handler" podUID="8272c479-155c-4286-af57-6ce3e45c949d" podNamespace="kube-system" podName="coredns-76f75df574-5rv6j"
Dec 13 02:41:24.435716 systemd[1]: Created slice kubepods-burstable-podbe165cf6_760e_48e3_8ec6_585c867dfc63.slice - libcontainer container kubepods-burstable-podbe165cf6_760e_48e3_8ec6_585c867dfc63.slice.
Dec 13 02:41:24.453465 systemd[1]: Created slice kubepods-burstable-pod8272c479_155c_4286_af57_6ce3e45c949d.slice - libcontainer container kubepods-burstable-pod8272c479_155c_4286_af57_6ce3e45c949d.slice.
Dec 13 02:41:24.475711 kubelet[2666]: I1213 02:41:24.474044 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8272c479-155c-4286-af57-6ce3e45c949d-config-volume\") pod \"coredns-76f75df574-5rv6j\" (UID: \"8272c479-155c-4286-af57-6ce3e45c949d\") " pod="kube-system/coredns-76f75df574-5rv6j"
Dec 13 02:41:24.475711 kubelet[2666]: I1213 02:41:24.474192 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-245lj\" (UniqueName: \"kubernetes.io/projected/be165cf6-760e-48e3-8ec6-585c867dfc63-kube-api-access-245lj\") pod \"coredns-76f75df574-8rrpb\" (UID: \"be165cf6-760e-48e3-8ec6-585c867dfc63\") " pod="kube-system/coredns-76f75df574-8rrpb"
Dec 13 02:41:24.475711 kubelet[2666]: I1213 02:41:24.474260 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdgvb\" (UniqueName: \"kubernetes.io/projected/8272c479-155c-4286-af57-6ce3e45c949d-kube-api-access-gdgvb\") pod \"coredns-76f75df574-5rv6j\" (UID: \"8272c479-155c-4286-af57-6ce3e45c949d\") " pod="kube-system/coredns-76f75df574-5rv6j"
Dec 13 02:41:24.475711 kubelet[2666]: I1213 02:41:24.474370 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be165cf6-760e-48e3-8ec6-585c867dfc63-config-volume\") pod \"coredns-76f75df574-8rrpb\" (UID: \"be165cf6-760e-48e3-8ec6-585c867dfc63\") " pod="kube-system/coredns-76f75df574-8rrpb"
Dec 13 02:41:24.748076 containerd[1452]: time="2024-12-13T02:41:24.747857096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8rrpb,Uid:be165cf6-760e-48e3-8ec6-585c867dfc63,Namespace:kube-system,Attempt:0,}"
Dec 13 02:41:24.769663 containerd[1452]: time="2024-12-13T02:41:24.768991876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5rv6j,Uid:8272c479-155c-4286-af57-6ce3e45c949d,Namespace:kube-system,Attempt:0,}"
Dec 13 02:41:25.542364 systemd-networkd[1351]: cilium_host: Link UP
Dec 13 02:41:25.545271 systemd-networkd[1351]: cilium_net: Link UP
Dec 13 02:41:25.546903 systemd-networkd[1351]: cilium_net: Gained carrier
Dec 13 02:41:25.550807 systemd-networkd[1351]: cilium_host: Gained carrier
Dec 13 02:41:25.760897 systemd-networkd[1351]: cilium_vxlan: Link UP
Dec 13 02:41:25.760918 systemd-networkd[1351]: cilium_vxlan: Gained carrier
Dec 13 02:41:26.157829 systemd-networkd[1351]: cilium_host: Gained IPv6LL
Dec 13 02:41:26.186644 kernel: NET: Registered PF_ALG protocol family
Dec 13 02:41:26.477918 systemd-networkd[1351]: cilium_net: Gained IPv6LL
Dec 13 02:41:27.154859 systemd-networkd[1351]: lxc_health: Link UP
Dec 13 02:41:27.163690 systemd-networkd[1351]: lxc_health: Gained carrier
Dec 13 02:41:27.181760 systemd-networkd[1351]: cilium_vxlan: Gained IPv6LL
Dec 13 02:41:27.458339 systemd-networkd[1351]: lxccf18da71615a: Link UP
Dec 13 02:41:27.466741 kernel: eth0: renamed from tmp9425f
Dec 13 02:41:27.472085 systemd-networkd[1351]: lxccf18da71615a: Gained carrier
Dec 13 02:41:27.500657 systemd-networkd[1351]: lxc1b5d044a8748: Link UP
Dec 13 02:41:27.506241 kernel: eth0: renamed from tmpafa7f
Dec 13 02:41:27.510454 systemd-networkd[1351]: lxc1b5d044a8748: Gained carrier
Dec 13 02:41:28.461825 systemd-networkd[1351]: lxc_health: Gained IPv6LL
Dec 13 02:41:29.102758 systemd-networkd[1351]: lxccf18da71615a: Gained IPv6LL
Dec 13 02:41:29.294751 systemd-networkd[1351]: lxc1b5d044a8748: Gained IPv6LL
Dec 13 02:41:32.420755 containerd[1452]: time="2024-12-13T02:41:32.420100651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:41:32.425401 containerd[1452]: time="2024-12-13T02:41:32.424514874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:41:32.425401 containerd[1452]: time="2024-12-13T02:41:32.425001671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:41:32.426586 containerd[1452]: time="2024-12-13T02:41:32.426403029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:41:32.447746 containerd[1452]: time="2024-12-13T02:41:32.446839944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:41:32.447746 containerd[1452]: time="2024-12-13T02:41:32.446904617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:41:32.447746 containerd[1452]: time="2024-12-13T02:41:32.446936877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:41:32.447746 containerd[1452]: time="2024-12-13T02:41:32.447029432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:41:32.476538 systemd[1]: Started cri-containerd-9425faf97fd3f020a6c2b5760f42c4270a200fc4814ddaf67b30714f85420456.scope - libcontainer container 9425faf97fd3f020a6c2b5760f42c4270a200fc4814ddaf67b30714f85420456.
Dec 13 02:41:32.494019 systemd[1]: Started cri-containerd-afa7f8ab2ad141f094ab74c06f8b56206a8f9e27962f240a7c03d64818443a9c.scope - libcontainer container afa7f8ab2ad141f094ab74c06f8b56206a8f9e27962f240a7c03d64818443a9c.
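[Editorial note] The systemd-networkd entries above trace the Cilium datapath coming up: cilium_host/cilium_net first, then the cilium_vxlan overlay, then lxc_health plus one lxc* veth per pod (lxccf18da71615a, lxc1b5d044a8748). A sketch for extracting that timeline from the journal (regex assumed from the message shapes above):

    import re

    LINK_RE = re.compile(
        r'systemd-networkd\[\d+\]: (?P<link>\S+): '
        r'(?P<event>Link UP|Link DOWN|Gained carrier|Lost carrier|Gained IPv6LL)'
    )

    def link_timeline(journal_lines):
        """Yield (interface, event) pairs in journal order."""
        for line in journal_lines:
            if (m := LINK_RE.search(line)):
                yield m.group("link"), m.group("event")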
Dec 13 02:41:32.577594 containerd[1452]: time="2024-12-13T02:41:32.577538805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8rrpb,Uid:be165cf6-760e-48e3-8ec6-585c867dfc63,Namespace:kube-system,Attempt:0,} returns sandbox id \"9425faf97fd3f020a6c2b5760f42c4270a200fc4814ddaf67b30714f85420456\""
Dec 13 02:41:32.584041 containerd[1452]: time="2024-12-13T02:41:32.583550255Z" level=info msg="CreateContainer within sandbox \"9425faf97fd3f020a6c2b5760f42c4270a200fc4814ddaf67b30714f85420456\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 02:41:32.596136 containerd[1452]: time="2024-12-13T02:41:32.596091022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5rv6j,Uid:8272c479-155c-4286-af57-6ce3e45c949d,Namespace:kube-system,Attempt:0,} returns sandbox id \"afa7f8ab2ad141f094ab74c06f8b56206a8f9e27962f240a7c03d64818443a9c\""
Dec 13 02:41:32.600427 containerd[1452]: time="2024-12-13T02:41:32.600391069Z" level=info msg="CreateContainer within sandbox \"afa7f8ab2ad141f094ab74c06f8b56206a8f9e27962f240a7c03d64818443a9c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 02:41:32.880120 containerd[1452]: time="2024-12-13T02:41:32.880006205Z" level=info msg="CreateContainer within sandbox \"afa7f8ab2ad141f094ab74c06f8b56206a8f9e27962f240a7c03d64818443a9c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a78c7fe551c3723a4a952942822368cc5de19b26e50d5cb824f8b9ad7fda678e\""
Dec 13 02:41:32.881415 containerd[1452]: time="2024-12-13T02:41:32.881200254Z" level=info msg="StartContainer for \"a78c7fe551c3723a4a952942822368cc5de19b26e50d5cb824f8b9ad7fda678e\""
Dec 13 02:41:32.945996 systemd[1]: Started cri-containerd-a78c7fe551c3723a4a952942822368cc5de19b26e50d5cb824f8b9ad7fda678e.scope - libcontainer container a78c7fe551c3723a4a952942822368cc5de19b26e50d5cb824f8b9ad7fda678e.
Dec 13 02:41:32.972830 containerd[1452]: time="2024-12-13T02:41:32.972788205Z" level=info msg="CreateContainer within sandbox \"9425faf97fd3f020a6c2b5760f42c4270a200fc4814ddaf67b30714f85420456\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e73a9abdb122b184cca36b24eba68dc3611310ed118fbc112ce7b8b0c36aaf9c\""
Dec 13 02:41:32.973757 containerd[1452]: time="2024-12-13T02:41:32.973730579Z" level=info msg="StartContainer for \"e73a9abdb122b184cca36b24eba68dc3611310ed118fbc112ce7b8b0c36aaf9c\""
Dec 13 02:41:33.020899 systemd[1]: Started cri-containerd-e73a9abdb122b184cca36b24eba68dc3611310ed118fbc112ce7b8b0c36aaf9c.scope - libcontainer container e73a9abdb122b184cca36b24eba68dc3611310ed118fbc112ce7b8b0c36aaf9c.
Dec 13 02:41:33.061464 containerd[1452]: time="2024-12-13T02:41:33.061060496Z" level=info msg="StartContainer for \"a78c7fe551c3723a4a952942822368cc5de19b26e50d5cb824f8b9ad7fda678e\" returns successfully"
Dec 13 02:41:33.082157 containerd[1452]: time="2024-12-13T02:41:33.082097516Z" level=info msg="StartContainer for \"e73a9abdb122b184cca36b24eba68dc3611310ed118fbc112ce7b8b0c36aaf9c\" returns successfully"
Dec 13 02:41:33.386319 kubelet[2666]: I1213 02:41:33.386200 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5rv6j" podStartSLOduration=36.386111831 podStartE2EDuration="36.386111831s" podCreationTimestamp="2024-12-13 02:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:41:33.38087443 +0000 UTC m=+47.602168716" watchObservedRunningTime="2024-12-13 02:41:33.386111831 +0000 UTC m=+47.607406117"
Dec 13 02:41:33.439447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143993933.mount: Deactivated successfully.
Dec 13 02:41:34.407185 kubelet[2666]: I1213 02:41:34.407108 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-8rrpb" podStartSLOduration=37.406980188 podStartE2EDuration="37.406980188s" podCreationTimestamp="2024-12-13 02:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:41:33.42315615 +0000 UTC m=+47.644450436" watchObservedRunningTime="2024-12-13 02:41:34.406980188 +0000 UTC m=+48.628274474"
Dec 13 02:42:05.043230 systemd[1]: Started sshd@9-172.24.4.28:22-172.24.4.1:58266.service - OpenSSH per-connection server daemon (172.24.4.1:58266).
Dec 13 02:42:06.471004 sshd[4039]: Accepted publickey for core from 172.24.4.1 port 58266 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:42:06.474091 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:42:06.489759 systemd-logind[1434]: New session 12 of user core.
Dec 13 02:42:06.495469 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 02:42:08.103031 sshd[4039]: pam_unix(sshd:session): session closed for user core
Dec 13 02:42:08.111514 systemd-logind[1434]: Session 12 logged out. Waiting for processes to exit.
Dec 13 02:42:08.113087 systemd[1]: sshd@9-172.24.4.28:22-172.24.4.1:58266.service: Deactivated successfully.
Dec 13 02:42:08.117582 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 02:42:08.122691 systemd-logind[1434]: Removed session 12.
Dec 13 02:42:13.126245 systemd[1]: Started sshd@10-172.24.4.28:22-172.24.4.1:58270.service - OpenSSH per-connection server daemon (172.24.4.1:58270).
Dec 13 02:42:14.469871 sshd[4052]: Accepted publickey for core from 172.24.4.1 port 58270 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:42:14.473127 sshd[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:42:14.489739 systemd-logind[1434]: New session 13 of user core.
Dec 13 02:42:14.498014 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 02:42:15.259543 sshd[4052]: pam_unix(sshd:session): session closed for user core
Dec 13 02:42:15.308231 systemd[1]: sshd@10-172.24.4.28:22-172.24.4.1:58270.service: Deactivated successfully.
Dec 13 02:42:15.313378 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 02:42:15.315725 systemd-logind[1434]: Session 13 logged out. Waiting for processes to exit.
Dec 13 02:42:15.319027 systemd-logind[1434]: Removed session 13.
Dec 13 02:42:20.294376 systemd[1]: Started sshd@11-172.24.4.28:22-172.24.4.1:54196.service - OpenSSH per-connection server daemon (172.24.4.1:54196).
Dec 13 02:42:21.383801 sshd[4067]: Accepted publickey for core from 172.24.4.1 port 54196 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:42:21.386778 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:42:21.398627 systemd-logind[1434]: New session 14 of user core.
Dec 13 02:42:21.411871 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 02:42:22.267830 sshd[4067]: pam_unix(sshd:session): session closed for user core
Dec 13 02:42:22.273870 systemd[1]: sshd@11-172.24.4.28:22-172.24.4.1:54196.service: Deactivated successfully.
Dec 13 02:42:22.279773 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 02:42:22.284202 systemd-logind[1434]: Session 14 logged out. Waiting for processes to exit.
Dec 13 02:42:22.286666 systemd-logind[1434]: Removed session 14.
Dec 13 02:42:27.288271 systemd[1]: Started sshd@12-172.24.4.28:22-172.24.4.1:58254.service - OpenSSH per-connection server daemon (172.24.4.1:58254).
Dec 13 02:42:28.366181 sshd[4081]: Accepted publickey for core from 172.24.4.1 port 58254 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:42:28.369581 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:42:28.382683 systemd-logind[1434]: New session 15 of user core.
Dec 13 02:42:28.386931 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 02:42:29.210259 sshd[4081]: pam_unix(sshd:session): session closed for user core
Dec 13 02:42:29.217591 systemd[1]: sshd@12-172.24.4.28:22-172.24.4.1:58254.service: Deactivated successfully.
Dec 13 02:42:29.219509 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 02:42:29.223949 systemd-logind[1434]: Session 15 logged out. Waiting for processes to exit.
Dec 13 02:42:29.229908 systemd[1]: Started sshd@13-172.24.4.28:22-172.24.4.1:58256.service - OpenSSH per-connection server daemon (172.24.4.1:58256).
Dec 13 02:42:29.232460 systemd-logind[1434]: Removed session 15.
Dec 13 02:42:30.869858 sshd[4096]: Accepted publickey for core from 172.24.4.1 port 58256 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:42:30.873076 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:42:30.885257 systemd-logind[1434]: New session 16 of user core.
Dec 13 02:42:30.889989 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 02:42:31.755151 sshd[4096]: pam_unix(sshd:session): session closed for user core
Dec 13 02:42:31.772335 systemd[1]: Started sshd@14-172.24.4.28:22-172.24.4.1:58268.service - OpenSSH per-connection server daemon (172.24.4.1:58268).
Dec 13 02:42:31.831524 systemd[1]: sshd@13-172.24.4.28:22-172.24.4.1:58256.service: Deactivated successfully.
Dec 13 02:42:31.838132 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 02:42:31.840591 systemd-logind[1434]: Session 16 logged out. Waiting for processes to exit.
Dec 13 02:42:31.844102 systemd-logind[1434]: Removed session 16.
Dec 13 02:42:32.968582 sshd[4105]: Accepted publickey for core from 172.24.4.1 port 58268 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:42:32.971172 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:42:32.983000 systemd-logind[1434]: New session 17 of user core.
Dec 13 02:42:32.992928 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 02:42:33.746379 sshd[4105]: pam_unix(sshd:session): session closed for user core
Dec 13 02:42:33.754248 systemd[1]: sshd@14-172.24.4.28:22-172.24.4.1:58268.service: Deactivated successfully.
Dec 13 02:42:33.759582 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 02:42:33.762078 systemd-logind[1434]: Session 17 logged out. Waiting for processes to exit.
Dec 13 02:42:33.764170 systemd-logind[1434]: Removed session 17.
Dec 13 02:42:38.772274 systemd[1]: Started sshd@15-172.24.4.28:22-172.24.4.1:50038.service - OpenSSH per-connection server daemon (172.24.4.1:50038).
Dec 13 02:42:39.968879 sshd[4120]: Accepted publickey for core from 172.24.4.1 port 50038 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:42:39.971429 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:42:39.980488 systemd-logind[1434]: New session 18 of user core.
Dec 13 02:42:39.987938 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 02:42:40.789548 sshd[4120]: pam_unix(sshd:session): session closed for user core
Dec 13 02:42:40.794980 systemd[1]: sshd@15-172.24.4.28:22-172.24.4.1:50038.service: Deactivated successfully.
Dec 13 02:42:40.800511 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 02:42:40.804202 systemd-logind[1434]: Session 18 logged out. Waiting for processes to exit.
Dec 13 02:42:40.805746 systemd-logind[1434]: Removed session 18.
Dec 13 02:42:45.815223 systemd[1]: Started sshd@16-172.24.4.28:22-172.24.4.1:58532.service - OpenSSH per-connection server daemon (172.24.4.1:58532).
Dec 13 02:42:47.126480 sshd[4133]: Accepted publickey for core from 172.24.4.1 port 58532 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:42:47.129115 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:42:47.138942 systemd-logind[1434]: New session 19 of user core.
Dec 13 02:42:47.146945 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 02:42:47.959842 sshd[4133]: pam_unix(sshd:session): session closed for user core
Dec 13 02:42:47.972056 systemd[1]: sshd@16-172.24.4.28:22-172.24.4.1:58532.service: Deactivated successfully.
Dec 13 02:42:47.976835 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 02:42:47.981146 systemd-logind[1434]: Session 19 logged out. Waiting for processes to exit.
Dec 13 02:42:47.989238 systemd[1]: Started sshd@17-172.24.4.28:22-172.24.4.1:58540.service - OpenSSH per-connection server daemon (172.24.4.1:58540).
Dec 13 02:42:47.995602 systemd-logind[1434]: Removed session 19.
Dec 13 02:42:49.208127 sshd[4148]: Accepted publickey for core from 172.24.4.1 port 58540 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:42:49.210933 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:42:49.223123 systemd-logind[1434]: New session 20 of user core.
Dec 13 02:42:49.228935 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 02:42:50.589951 sshd[4148]: pam_unix(sshd:session): session closed for user core
Dec 13 02:42:50.604339 systemd[1]: sshd@17-172.24.4.28:22-172.24.4.1:58540.service: Deactivated successfully.
Dec 13 02:42:50.609755 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 02:42:50.614055 systemd-logind[1434]: Session 20 logged out. Waiting for processes to exit.
Dec 13 02:42:50.621183 systemd[1]: Started sshd@18-172.24.4.28:22-172.24.4.1:58546.service - OpenSSH per-connection server daemon (172.24.4.1:58546).
Dec 13 02:42:50.624661 systemd-logind[1434]: Removed session 20.
Dec 13 02:42:51.848236 sshd[4159]: Accepted publickey for core from 172.24.4.1 port 58546 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:42:51.849814 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:42:51.857241 systemd-logind[1434]: New session 21 of user core.
Dec 13 02:42:51.863785 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 02:42:54.479712 sshd[4159]: pam_unix(sshd:session): session closed for user core
Dec 13 02:42:54.505771 systemd[1]: Started sshd@19-172.24.4.28:22-172.24.4.1:58556.service - OpenSSH per-connection server daemon (172.24.4.1:58556).
Dec 13 02:42:54.509806 systemd[1]: sshd@18-172.24.4.28:22-172.24.4.1:58546.service: Deactivated successfully.
Dec 13 02:42:54.513290 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 02:42:54.519567 systemd-logind[1434]: Session 21 logged out. Waiting for processes to exit.
Dec 13 02:42:54.524044 systemd-logind[1434]: Removed session 21.
Dec 13 02:42:55.850301 sshd[4177]: Accepted publickey for core from 172.24.4.1 port 58556 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:42:55.852954 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:42:55.864506 systemd-logind[1434]: New session 22 of user core.
Dec 13 02:42:55.871866 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 02:42:56.923262 sshd[4177]: pam_unix(sshd:session): session closed for user core
Dec 13 02:42:56.948278 systemd[1]: sshd@19-172.24.4.28:22-172.24.4.1:58556.service: Deactivated successfully.
Dec 13 02:42:56.960747 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 02:42:56.964260 systemd-logind[1434]: Session 22 logged out. Waiting for processes to exit.
Dec 13 02:42:56.974024 systemd[1]: Started sshd@20-172.24.4.28:22-172.24.4.1:39756.service - OpenSSH per-connection server daemon (172.24.4.1:39756).
Dec 13 02:42:56.976213 systemd-logind[1434]: Removed session 22.
Dec 13 02:42:58.240347 sshd[4190]: Accepted publickey for core from 172.24.4.1 port 39756 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:42:58.243173 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:42:58.252511 systemd-logind[1434]: New session 23 of user core.
Dec 13 02:42:58.262914 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 02:42:59.017494 sshd[4190]: pam_unix(sshd:session): session closed for user core
Dec 13 02:42:59.021991 systemd[1]: sshd@20-172.24.4.28:22-172.24.4.1:39756.service: Deactivated successfully.
Dec 13 02:42:59.025015 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 02:42:59.026299 systemd-logind[1434]: Session 23 logged out. Waiting for processes to exit.
Dec 13 02:42:59.027890 systemd-logind[1434]: Removed session 23.
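[Editorial note] Sessions 12 through 23 above all follow the same pam_unix open/close bracketing keyed by the sshd pid, so per-session durations fall out of one pass over the journal. A sketch; the timestamp format is the journal's own syslog-style prefix, and the year is an assumption since the prefix omits it:

    import re
    from datetime import datetime

    TS_RE = re.compile(r'^(?P<ts>\w+ \d+ [\d:.]+) sshd\[(?P<pid>\d+)\]: '
                       r'pam_unix\(sshd:session\): session (?P<what>opened|closed)')

    def session_durations(journal_lines, year=2024):   # year: assumption, not in the log
        opened, durations = {}, {}
        for line in journal_lines:
            m = TS_RE.search(line)
            if not m:
                continue
            ts = datetime.strptime(f"{year} {m.group('ts')}", "%Y %b %d %H:%M:%S.%f")
            pid = m.group("pid")
            if m.group("what") == "opened":
                opened[pid] = ts
            elif pid in opened:
                durations[pid] = (ts - opened.pop(pid)).total_seconds()
        return durations   # e.g. sshd[4039] (session 12) -> ~1.6 s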
Dec 13 02:43:04.038188 systemd[1]: Started sshd@21-172.24.4.28:22-172.24.4.1:39770.service - OpenSSH per-connection server daemon (172.24.4.1:39770).
Dec 13 02:43:05.319428 sshd[4208]: Accepted publickey for core from 172.24.4.1 port 39770 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:43:05.322449 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:43:05.336881 systemd-logind[1434]: New session 24 of user core.
Dec 13 02:43:05.345064 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 02:43:06.057139 sshd[4208]: pam_unix(sshd:session): session closed for user core
Dec 13 02:43:06.065510 systemd[1]: sshd@21-172.24.4.28:22-172.24.4.1:39770.service: Deactivated successfully.
Dec 13 02:43:06.072155 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 02:43:06.074398 systemd-logind[1434]: Session 24 logged out. Waiting for processes to exit.
Dec 13 02:43:06.077422 systemd-logind[1434]: Removed session 24.
Dec 13 02:43:11.083224 systemd[1]: Started sshd@22-172.24.4.28:22-172.24.4.1:57780.service - OpenSSH per-connection server daemon (172.24.4.1:57780).
Dec 13 02:43:12.467493 sshd[4221]: Accepted publickey for core from 172.24.4.1 port 57780 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:43:12.470478 sshd[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:43:12.481884 systemd-logind[1434]: New session 25 of user core.
Dec 13 02:43:12.491134 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 02:43:13.221079 sshd[4221]: pam_unix(sshd:session): session closed for user core
Dec 13 02:43:13.228527 systemd[1]: sshd@22-172.24.4.28:22-172.24.4.1:57780.service: Deactivated successfully.
Dec 13 02:43:13.235460 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 02:43:13.237964 systemd-logind[1434]: Session 25 logged out. Waiting for processes to exit.
Dec 13 02:43:13.241332 systemd-logind[1434]: Removed session 25.
Dec 13 02:43:18.241171 systemd[1]: Started sshd@23-172.24.4.28:22-172.24.4.1:49290.service - OpenSSH per-connection server daemon (172.24.4.1:49290).
Dec 13 02:43:19.371567 sshd[4234]: Accepted publickey for core from 172.24.4.1 port 49290 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:43:19.374336 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:43:19.385950 systemd-logind[1434]: New session 26 of user core.
Dec 13 02:43:19.394102 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 02:43:20.250245 sshd[4234]: pam_unix(sshd:session): session closed for user core
Dec 13 02:43:20.257815 systemd[1]: sshd@23-172.24.4.28:22-172.24.4.1:49290.service: Deactivated successfully.
Dec 13 02:43:20.259831 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 02:43:20.262450 systemd-logind[1434]: Session 26 logged out. Waiting for processes to exit.
Dec 13 02:43:20.268127 systemd[1]: Started sshd@24-172.24.4.28:22-172.24.4.1:49302.service - OpenSSH per-connection server daemon (172.24.4.1:49302).
Dec 13 02:43:20.270750 systemd-logind[1434]: Removed session 26.
Dec 13 02:43:21.628327 sshd[4247]: Accepted publickey for core from 172.24.4.1 port 49302 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:43:21.630918 sshd[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:43:21.639514 systemd-logind[1434]: New session 27 of user core.
Dec 13 02:43:21.652959 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 02:43:24.239924 containerd[1452]: time="2024-12-13T02:43:24.239772835Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:43:24.306651 containerd[1452]: time="2024-12-13T02:43:24.306383077Z" level=info msg="StopContainer for \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\" with timeout 30 (s)"
Dec 13 02:43:24.306651 containerd[1452]: time="2024-12-13T02:43:24.306494327Z" level=info msg="StopContainer for \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\" with timeout 2 (s)"
Dec 13 02:43:24.312509 containerd[1452]: time="2024-12-13T02:43:24.312393718Z" level=info msg="Stop container \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\" with signal terminated"
Dec 13 02:43:24.312599 containerd[1452]: time="2024-12-13T02:43:24.312496171Z" level=info msg="Stop container \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\" with signal terminated"
Dec 13 02:43:24.332302 systemd-networkd[1351]: lxc_health: Link DOWN
Dec 13 02:43:24.332505 systemd-networkd[1351]: lxc_health: Lost carrier
Dec 13 02:43:24.349542 systemd[1]: cri-containerd-df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8.scope: Deactivated successfully.
Dec 13 02:43:24.371410 systemd[1]: cri-containerd-6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa.scope: Deactivated successfully.
Dec 13 02:43:24.371646 systemd[1]: cri-containerd-6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa.scope: Consumed 9.341s CPU time.
Dec 13 02:43:24.388940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8-rootfs.mount: Deactivated successfully.
Dec 13 02:43:24.397899 containerd[1452]: time="2024-12-13T02:43:24.397688994Z" level=info msg="shim disconnected" id=df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8 namespace=k8s.io
Dec 13 02:43:24.397899 containerd[1452]: time="2024-12-13T02:43:24.397763624Z" level=warning msg="cleaning up after shim disconnected" id=df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8 namespace=k8s.io
Dec 13 02:43:24.397899 containerd[1452]: time="2024-12-13T02:43:24.397774034Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:43:24.409249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa-rootfs.mount: Deactivated successfully.
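[Editorial note] The two StopContainer entries above carry different grace periods: the operator is stopped with a 30 s timeout while the cilium-agent gets only 2 s, and its scope reports 9.341s of CPU consumed over its lifetime. Pulling per-container grace periods out of the journal is another one-pass sketch (regex assumed from the msg format shown above):

    import re

    STOP_RE = re.compile(r'StopContainer for \\?"(?P<cid>[0-9a-f]{12,})\\?" '
                         r'with timeout (?P<secs>\d+) \(s\)')

    def stop_timeouts(journal_lines):
        """Map container id -> grace period in seconds, from StopContainer msgs."""
        return {m.group("cid"): int(m.group("secs"))
                for line in journal_lines if (m := STOP_RE.search(line))}

    # Here: {'df682f...': 30, '6c1cf2...': 2} - operator vs cilium-agent.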
Dec 13 02:43:24.425467 containerd[1452]: time="2024-12-13T02:43:24.425378332Z" level=info msg="shim disconnected" id=6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa namespace=k8s.io
Dec 13 02:43:24.425467 containerd[1452]: time="2024-12-13T02:43:24.425468301Z" level=warning msg="cleaning up after shim disconnected" id=6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa namespace=k8s.io
Dec 13 02:43:24.425467 containerd[1452]: time="2024-12-13T02:43:24.425487076Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:43:24.429026 containerd[1452]: time="2024-12-13T02:43:24.428962256Z" level=info msg="StopContainer for \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\" returns successfully"
Dec 13 02:43:24.430118 containerd[1452]: time="2024-12-13T02:43:24.429940457Z" level=info msg="StopPodSandbox for \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\""
Dec 13 02:43:24.430118 containerd[1452]: time="2024-12-13T02:43:24.429992646Z" level=info msg="Container to stop \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:43:24.434470 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f-shm.mount: Deactivated successfully.
Dec 13 02:43:24.443235 systemd[1]: cri-containerd-01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f.scope: Deactivated successfully.
Dec 13 02:43:24.457006 containerd[1452]: time="2024-12-13T02:43:24.456690788Z" level=info msg="StopContainer for \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\" returns successfully"
Dec 13 02:43:24.457524 containerd[1452]: time="2024-12-13T02:43:24.457361320Z" level=info msg="StopPodSandbox for \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\""
Dec 13 02:43:24.458432 containerd[1452]: time="2024-12-13T02:43:24.457539846Z" level=info msg="Container to stop \"a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:43:24.458432 containerd[1452]: time="2024-12-13T02:43:24.457591022Z" level=info msg="Container to stop \"04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:43:24.458432 containerd[1452]: time="2024-12-13T02:43:24.457645154Z" level=info msg="Container to stop \"ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:43:24.458432 containerd[1452]: time="2024-12-13T02:43:24.457659672Z" level=info msg="Container to stop \"f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:43:24.458432 containerd[1452]: time="2024-12-13T02:43:24.457671785Z" level=info msg="Container to stop \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:43:24.464192 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291-shm.mount: Deactivated successfully.
Dec 13 02:43:24.475933 systemd[1]: cri-containerd-4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291.scope: Deactivated successfully.
Dec 13 02:43:24.494988 containerd[1452]: time="2024-12-13T02:43:24.493415950Z" level=info msg="shim disconnected" id=01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f namespace=k8s.io
Dec 13 02:43:24.494988 containerd[1452]: time="2024-12-13T02:43:24.493716255Z" level=warning msg="cleaning up after shim disconnected" id=01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f namespace=k8s.io
Dec 13 02:43:24.494988 containerd[1452]: time="2024-12-13T02:43:24.493737606Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:43:24.512262 containerd[1452]: time="2024-12-13T02:43:24.512210579Z" level=info msg="shim disconnected" id=4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291 namespace=k8s.io
Dec 13 02:43:24.513050 containerd[1452]: time="2024-12-13T02:43:24.513030903Z" level=warning msg="cleaning up after shim disconnected" id=4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291 namespace=k8s.io
Dec 13 02:43:24.513123 containerd[1452]: time="2024-12-13T02:43:24.513108680Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:43:24.528599 containerd[1452]: time="2024-12-13T02:43:24.528561870Z" level=info msg="TearDown network for sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" successfully"
Dec 13 02:43:24.528764 containerd[1452]: time="2024-12-13T02:43:24.528746749Z" level=info msg="StopPodSandbox for \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" returns successfully"
Dec 13 02:43:24.529438 containerd[1452]: time="2024-12-13T02:43:24.529148895Z" level=info msg="TearDown network for sandbox \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\" successfully"
Dec 13 02:43:24.529438 containerd[1452]: time="2024-12-13T02:43:24.529219989Z" level=info msg="StopPodSandbox for \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\" returns successfully"
Dec 13 02:43:24.553136 kubelet[2666]: I1213 02:43:24.553078 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-hostproc\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.553136 kubelet[2666]: I1213 02:43:24.553126 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-cni-path\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.553136 kubelet[2666]: I1213 02:43:24.553164 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e33bc2f-21fa-4203-be74-def4b3f1347e-cilium-config-path\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.555501 kubelet[2666]: I1213 02:43:24.553194 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a15c568-ad16-4365-a519-eec385ad72b1-cilium-config-path\") pod \"7a15c568-ad16-4365-a519-eec385ad72b1\" (UID: \"7a15c568-ad16-4365-a519-eec385ad72b1\") "
Dec 13 02:43:24.555501 kubelet[2666]: I1213 02:43:24.553222 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfqpp\" (UniqueName: \"kubernetes.io/projected/7a15c568-ad16-4365-a519-eec385ad72b1-kube-api-access-sfqpp\") pod \"7a15c568-ad16-4365-a519-eec385ad72b1\" (UID: \"7a15c568-ad16-4365-a519-eec385ad72b1\") "
Dec 13 02:43:24.555501 kubelet[2666]: I1213 02:43:24.553247 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e33bc2f-21fa-4203-be74-def4b3f1347e-clustermesh-secrets\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.555501 kubelet[2666]: I1213 02:43:24.553289 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-etc-cni-netd\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.555501 kubelet[2666]: I1213 02:43:24.553316 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e33bc2f-21fa-4203-be74-def4b3f1347e-hubble-tls\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.555501 kubelet[2666]: I1213 02:43:24.553339 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-host-proc-sys-kernel\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.555744 kubelet[2666]: I1213 02:43:24.553361 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-host-proc-sys-net\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.555744 kubelet[2666]: I1213 02:43:24.553383 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-xtables-lock\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.555744 kubelet[2666]: I1213 02:43:24.553403 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-cilium-run\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.555744 kubelet[2666]: I1213 02:43:24.553427 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fsm7\" (UniqueName: \"kubernetes.io/projected/7e33bc2f-21fa-4203-be74-def4b3f1347e-kube-api-access-8fsm7\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.555744 kubelet[2666]: I1213 02:43:24.553450 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-cilium-cgroup\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.555744 kubelet[2666]: I1213 02:43:24.553473 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-bpf-maps\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.556311 kubelet[2666]: I1213 02:43:24.553496 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-lib-modules\") pod \"7e33bc2f-21fa-4203-be74-def4b3f1347e\" (UID: \"7e33bc2f-21fa-4203-be74-def4b3f1347e\") "
Dec 13 02:43:24.563380 kubelet[2666]: I1213 02:43:24.562993 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:43:24.573531 kubelet[2666]: I1213 02:43:24.573113 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:43:24.573531 kubelet[2666]: I1213 02:43:24.573183 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:43:24.573531 kubelet[2666]: I1213 02:43:24.573220 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:43:24.574800 kubelet[2666]: I1213 02:43:24.574581 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-hostproc" (OuterVolumeSpecName: "hostproc") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:43:24.574800 kubelet[2666]: I1213 02:43:24.574657 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-cni-path" (OuterVolumeSpecName: "cni-path") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:43:24.579253 kubelet[2666]: I1213 02:43:24.579199 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a15c568-ad16-4365-a519-eec385ad72b1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7a15c568-ad16-4365-a519-eec385ad72b1" (UID: "7a15c568-ad16-4365-a519-eec385ad72b1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:43:24.580442 kubelet[2666]: I1213 02:43:24.580406 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:43:24.580645 kubelet[2666]: I1213 02:43:24.580532 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:43:24.581388 kubelet[2666]: I1213 02:43:24.553784 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:43:24.583013 kubelet[2666]: I1213 02:43:24.582698 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e33bc2f-21fa-4203-be74-def4b3f1347e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:43:24.583013 kubelet[2666]: I1213 02:43:24.582775 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e33bc2f-21fa-4203-be74-def4b3f1347e-kube-api-access-8fsm7" (OuterVolumeSpecName: "kube-api-access-8fsm7") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "kube-api-access-8fsm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:43:24.584015 kubelet[2666]: I1213 02:43:24.583898 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:43:24.588016 kubelet[2666]: I1213 02:43:24.587968 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e33bc2f-21fa-4203-be74-def4b3f1347e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:43:24.589047 kubelet[2666]: I1213 02:43:24.589010 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a15c568-ad16-4365-a519-eec385ad72b1-kube-api-access-sfqpp" (OuterVolumeSpecName: "kube-api-access-sfqpp") pod "7a15c568-ad16-4365-a519-eec385ad72b1" (UID: "7a15c568-ad16-4365-a519-eec385ad72b1"). InnerVolumeSpecName "kube-api-access-sfqpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:43:24.589134 kubelet[2666]: I1213 02:43:24.589105 2666 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e33bc2f-21fa-4203-be74-def4b3f1347e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7e33bc2f-21fa-4203-be74-def4b3f1347e" (UID: "7e33bc2f-21fa-4203-be74-def4b3f1347e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:43:24.654290 kubelet[2666]: I1213 02:43:24.654258 2666 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-lib-modules\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654667 kubelet[2666]: I1213 02:43:24.654439 2666 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e33bc2f-21fa-4203-be74-def4b3f1347e-cilium-config-path\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654667 kubelet[2666]: I1213 02:43:24.654457 2666 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-hostproc\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654667 kubelet[2666]: I1213 02:43:24.654469 2666 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-cni-path\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654667 kubelet[2666]: I1213 02:43:24.654482 2666 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a15c568-ad16-4365-a519-eec385ad72b1-cilium-config-path\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654667 kubelet[2666]: I1213 02:43:24.654495 2666 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sfqpp\" (UniqueName: \"kubernetes.io/projected/7a15c568-ad16-4365-a519-eec385ad72b1-kube-api-access-sfqpp\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654667 kubelet[2666]: I1213 02:43:24.654508 2666 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e33bc2f-21fa-4203-be74-def4b3f1347e-clustermesh-secrets\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654667 kubelet[2666]: I1213 02:43:24.654520 2666 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-etc-cni-netd\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654856 kubelet[2666]: I1213 02:43:24.654532 2666 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e33bc2f-21fa-4203-be74-def4b3f1347e-hubble-tls\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654856 kubelet[2666]: I1213 02:43:24.654544 2666 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-xtables-lock\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654856 kubelet[2666]: I1213 02:43:24.654555 2666 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-cilium-run\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654856 kubelet[2666]: I1213 02:43:24.654568 2666 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-host-proc-sys-kernel\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654856 kubelet[2666]: I1213 02:43:24.654582 2666 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-host-proc-sys-net\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654856 kubelet[2666]: I1213 02:43:24.654593 2666 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-cilium-cgroup\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.654856 kubelet[2666]: I1213 02:43:24.654636 2666 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e33bc2f-21fa-4203-be74-def4b3f1347e-bpf-maps\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.655019 kubelet[2666]: I1213 02:43:24.654651 2666 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8fsm7\" (UniqueName: \"kubernetes.io/projected/7e33bc2f-21fa-4203-be74-def4b3f1347e-kube-api-access-8fsm7\") on node \"ci-4081-2-1-7-a50b4b34f3.novalocal\" DevicePath \"\""
Dec 13 02:43:24.721592 kubelet[2666]: I1213 02:43:24.719788 2666 scope.go:117] "RemoveContainer" containerID="6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa"
Dec 13 02:43:24.726884 containerd[1452]: time="2024-12-13T02:43:24.726821050Z" level=info msg="RemoveContainer for \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\""
Dec 13 02:43:24.745565 containerd[1452]: time="2024-12-13T02:43:24.745124885Z" level=info msg="RemoveContainer for \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\" returns successfully"
Dec 13 02:43:24.757495 kubelet[2666]: I1213 02:43:24.757428 2666 scope.go:117] "RemoveContainer" containerID="f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0"
Dec 13 02:43:24.762942 systemd[1]: Removed slice kubepods-burstable-pod7e33bc2f_21fa_4203_be74_def4b3f1347e.slice - libcontainer container kubepods-burstable-pod7e33bc2f_21fa_4203_be74_def4b3f1347e.slice.
Dec 13 02:43:24.763207 systemd[1]: kubepods-burstable-pod7e33bc2f_21fa_4203_be74_def4b3f1347e.slice: Consumed 9.430s CPU time.
Dec 13 02:43:24.788262 containerd[1452]: time="2024-12-13T02:43:24.787469743Z" level=info msg="RemoveContainer for \"f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0\""
Dec 13 02:43:24.794297 systemd[1]: Removed slice kubepods-besteffort-pod7a15c568_ad16_4365_a519_eec385ad72b1.slice - libcontainer container kubepods-besteffort-pod7a15c568_ad16_4365_a519_eec385ad72b1.slice.
Dec 13 02:43:24.796245 containerd[1452]: time="2024-12-13T02:43:24.795751488Z" level=info msg="RemoveContainer for \"f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0\" returns successfully"
Dec 13 02:43:24.796307 kubelet[2666]: I1213 02:43:24.796067 2666 scope.go:117] "RemoveContainer" containerID="ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11"
Dec 13 02:43:24.799498 containerd[1452]: time="2024-12-13T02:43:24.799473122Z" level=info msg="RemoveContainer for \"ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11\""
Dec 13 02:43:24.804037 containerd[1452]: time="2024-12-13T02:43:24.803851291Z" level=info msg="RemoveContainer for \"ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11\" returns successfully"
Dec 13 02:43:24.804334 kubelet[2666]: I1213 02:43:24.804291 2666 scope.go:117] "RemoveContainer" containerID="04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2"
Dec 13 02:43:24.805658 containerd[1452]: time="2024-12-13T02:43:24.805592918Z" level=info msg="RemoveContainer for \"04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2\""
Dec 13 02:43:24.810986 containerd[1452]: time="2024-12-13T02:43:24.810942556Z" level=info msg="RemoveContainer for \"04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2\" returns successfully"
Dec 13 02:43:24.811576 kubelet[2666]: I1213 02:43:24.811356 2666 scope.go:117] "RemoveContainer" containerID="a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60"
Dec 13 02:43:24.814436 containerd[1452]: time="2024-12-13T02:43:24.814376608Z" level=info msg="RemoveContainer for \"a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60\""
Dec 13 02:43:24.819115 containerd[1452]: time="2024-12-13T02:43:24.819025297Z" level=info msg="RemoveContainer for \"a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60\" returns successfully"
Dec 13 02:43:24.819521 kubelet[2666]: I1213 02:43:24.819276 2666 scope.go:117] "RemoveContainer" containerID="6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa"
Dec 13 02:43:24.830393 containerd[1452]: time="2024-12-13T02:43:24.821102435Z" level=error msg="ContainerStatus for \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\": not found"
Dec 13 02:43:24.843490 kubelet[2666]: E1213 02:43:24.840885 2666 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\": not found" containerID="6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa"
Dec 13 02:43:24.857475 kubelet[2666]: I1213 02:43:24.857423 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa"} err="failed to get container status \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c1cf2fb3954ba1c76e7de4e195b63c67252ffde71aefe77236f11e56f1ec2fa\": not found"
Dec 13 02:43:24.857818 kubelet[2666]: I1213 02:43:24.857569 2666 scope.go:117] "RemoveContainer" containerID="f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0"
Dec 13 02:43:24.858340 containerd[1452]: time="2024-12-13T02:43:24.858264019Z" level=error msg="ContainerStatus for \"f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0\": not found"
Dec 13 02:43:24.858808 kubelet[2666]: E1213 02:43:24.858559 2666 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0\": not found" containerID="f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0"
Dec 13 02:43:24.858808 kubelet[2666]: I1213 02:43:24.858650 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0"} err="failed to get container status \"f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0\": rpc error: code = NotFound desc = an error occurred when try to find container \"f273540563d15dba82158c5cc2d6320b8d6e9a654f49e57d3c0d40794b67cbb0\": not found"
Dec 13 02:43:24.858808 kubelet[2666]: I1213 02:43:24.858664 2666 scope.go:117] "RemoveContainer" containerID="ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11"
Dec 13 02:43:24.859834 containerd[1452]: time="2024-12-13T02:43:24.859545330Z" level=error msg="ContainerStatus for \"ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11\": not found"
Dec 13 02:43:24.860064 kubelet[2666]: E1213 02:43:24.860045 2666 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11\": not found" containerID="ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11"
Dec 13 02:43:24.860120 kubelet[2666]: I1213 02:43:24.860110 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11"} err="failed to get container status \"ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11\": rpc error: code = NotFound desc = an error occurred when try to find container \"ceecbdcc68f36dca80f5d1397fea05a82030b3c10cf4a25ce55335bf3020cf11\": not found"
Dec 13 02:43:24.860151 kubelet[2666]: I1213 02:43:24.860123 2666 scope.go:117] "RemoveContainer" containerID="04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2"
Dec 13 02:43:24.860347 containerd[1452]: time="2024-12-13T02:43:24.860320559Z" level=error msg="ContainerStatus for \"04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2\": not found"
Dec 13 02:43:24.860561 kubelet[2666]: E1213 02:43:24.860542 2666 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2\": not found" containerID="04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2"
Dec 13 02:43:24.860621 kubelet[2666]: I1213 02:43:24.860572 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2"} err="failed to get container status \"04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2\": rpc error: code = NotFound desc = an error occurred when try to find container \"04cd8439dac9618fa19ecb82ad93580003b6f9eab8902ee8b5c72c5b36a44ef2\": not found"
Dec 13 02:43:24.860621 kubelet[2666]: I1213 02:43:24.860584 2666 scope.go:117] "RemoveContainer" containerID="a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60"
Dec 13 02:43:24.860884 containerd[1452]: time="2024-12-13T02:43:24.860832022Z" level=error msg="ContainerStatus for \"a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60\": not found"
Dec 13 02:43:24.861002 kubelet[2666]: E1213 02:43:24.860983 2666 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60\": not found" containerID="a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60"
Dec 13 02:43:24.861046 kubelet[2666]: I1213 02:43:24.861016 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60"} err="failed to get container status \"a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6b0205f6ab6c449ec3f5cce51600b187717a93a69381fe1b24b7edc80193e60\": not found"
Dec 13 02:43:24.861046 kubelet[2666]: I1213 02:43:24.861031 2666 scope.go:117] "RemoveContainer" containerID="df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8"
Dec 13 02:43:24.862247 containerd[1452]: time="2024-12-13T02:43:24.862199345Z" level=info msg="RemoveContainer for \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\""
Dec 13 02:43:24.865730 containerd[1452]: time="2024-12-13T02:43:24.865707517Z" level=info msg="RemoveContainer for \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\" returns successfully"
Dec 13 02:43:24.865945 kubelet[2666]: I1213 02:43:24.865925 2666 scope.go:117] "RemoveContainer" containerID="df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8"
Dec 13 02:43:24.866170 containerd[1452]: time="2024-12-13T02:43:24.866107299Z" level=error msg="ContainerStatus for \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\": not found"
Dec 13 02:43:24.866315 kubelet[2666]: E1213 02:43:24.866247 2666 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\": not found" containerID="df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8"
Dec 13 02:43:24.866356 kubelet[2666]: I1213 02:43:24.866333 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8"} err="failed to get container status \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"df682fa3c7466e18ca4a717e621027ec9faff582438fc52d0bacb36a774802b8\": not found"
Dec 13 02:43:25.210276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291-rootfs.mount: Deactivated successfully.
Dec 13 02:43:25.210519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f-rootfs.mount: Deactivated successfully.
Dec 13 02:43:25.210740 systemd[1]: var-lib-kubelet-pods-7e33bc2f\x2d21fa\x2d4203\x2dbe74\x2ddef4b3f1347e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8fsm7.mount: Deactivated successfully.
Dec 13 02:43:25.210902 systemd[1]: var-lib-kubelet-pods-7a15c568\x2dad16\x2d4365\x2da519\x2deec385ad72b1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsfqpp.mount: Deactivated successfully.
Dec 13 02:43:25.211057 systemd[1]: var-lib-kubelet-pods-7e33bc2f\x2d21fa\x2d4203\x2dbe74\x2ddef4b3f1347e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:43:25.211213 systemd[1]: var-lib-kubelet-pods-7e33bc2f\x2d21fa\x2d4203\x2dbe74\x2ddef4b3f1347e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 02:43:25.952595 kubelet[2666]: I1213 02:43:25.952516 2666 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7a15c568-ad16-4365-a519-eec385ad72b1" path="/var/lib/kubelet/pods/7a15c568-ad16-4365-a519-eec385ad72b1/volumes"
Dec 13 02:43:25.953845 kubelet[2666]: I1213 02:43:25.953789 2666 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7e33bc2f-21fa-4203-be74-def4b3f1347e" path="/var/lib/kubelet/pods/7e33bc2f-21fa-4203-be74-def4b3f1347e/volumes"
Dec 13 02:43:26.092863 kubelet[2666]: E1213 02:43:26.092767 2666 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:43:26.236163 sshd[4247]: pam_unix(sshd:session): session closed for user core
Dec 13 02:43:26.249412 systemd[1]: sshd@24-172.24.4.28:22-172.24.4.1:49302.service: Deactivated successfully.
Dec 13 02:43:26.253538 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 02:43:26.254450 systemd[1]: session-27.scope: Consumed 1.532s CPU time.
Dec 13 02:43:26.256301 systemd-logind[1434]: Session 27 logged out. Waiting for processes to exit.
Dec 13 02:43:26.267651 systemd[1]: Started sshd@25-172.24.4.28:22-172.24.4.1:33946.service - OpenSSH per-connection server daemon (172.24.4.1:33946).
Dec 13 02:43:26.272315 systemd-logind[1434]: Removed session 27.
Dec 13 02:43:27.446430 sshd[4410]: Accepted publickey for core from 172.24.4.1 port 33946 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:43:27.449503 sshd[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:43:27.459681 systemd-logind[1434]: New session 28 of user core.
Dec 13 02:43:27.471936 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 02:43:28.723876 kubelet[2666]: I1213 02:43:28.722888 2666 topology_manager.go:215] "Topology Admit Handler" podUID="9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f" podNamespace="kube-system" podName="cilium-zlmtk"
Dec 13 02:43:28.731273 kubelet[2666]: E1213 02:43:28.730449 2666 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e33bc2f-21fa-4203-be74-def4b3f1347e" containerName="cilium-agent"
Dec 13 02:43:28.731273 kubelet[2666]: E1213 02:43:28.730491 2666 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a15c568-ad16-4365-a519-eec385ad72b1" containerName="cilium-operator"
Dec 13 02:43:28.731273 kubelet[2666]: E1213 02:43:28.730501 2666 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e33bc2f-21fa-4203-be74-def4b3f1347e" containerName="clean-cilium-state"
Dec 13 02:43:28.731273 kubelet[2666]: E1213 02:43:28.730511 2666 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e33bc2f-21fa-4203-be74-def4b3f1347e" containerName="mount-cgroup"
Dec 13 02:43:28.731273 kubelet[2666]: E1213 02:43:28.730519 2666 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e33bc2f-21fa-4203-be74-def4b3f1347e" containerName="apply-sysctl-overwrites"
Dec 13 02:43:28.731273 kubelet[2666]: E1213 02:43:28.730527 2666 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e33bc2f-21fa-4203-be74-def4b3f1347e" containerName="mount-bpf-fs"
Dec 13 02:43:28.731273 kubelet[2666]: I1213 02:43:28.730557 2666 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a15c568-ad16-4365-a519-eec385ad72b1" containerName="cilium-operator"
Dec 13 02:43:28.731273 kubelet[2666]: I1213 02:43:28.730566 2666 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e33bc2f-21fa-4203-be74-def4b3f1347e" containerName="cilium-agent"
Dec 13 02:43:28.792811 kubelet[2666]: I1213 02:43:28.791594 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-clustermesh-secrets\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.792811 kubelet[2666]: I1213 02:43:28.791695 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-xtables-lock\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.792811 kubelet[2666]: I1213 02:43:28.791726 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-host-proc-sys-kernel\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.792811 kubelet[2666]: I1213 02:43:28.791758 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-cilium-ipsec-secrets\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.792811 kubelet[2666]: I1213 02:43:28.791785 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-bpf-maps\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.792811 kubelet[2666]: I1213 02:43:28.791817 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-cilium-cgroup\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.793140 kubelet[2666]: I1213 02:43:28.791847 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-lib-modules\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.793140 kubelet[2666]: I1213 02:43:28.791872 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-cilium-config-path\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.793140 kubelet[2666]: I1213 02:43:28.791904 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-hostproc\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.793140 kubelet[2666]: I1213 02:43:28.791935 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-etc-cni-netd\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.793140 kubelet[2666]: I1213 02:43:28.791961 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-hubble-tls\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.793140 kubelet[2666]: I1213 02:43:28.791988 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7lxl\" (UniqueName: \"kubernetes.io/projected/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-kube-api-access-v7lxl\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.793283 kubelet[2666]: I1213 02:43:28.792016 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-host-proc-sys-net\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.793283 kubelet[2666]: I1213 02:43:28.792040 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-cilium-run\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.793283 kubelet[2666]: I1213 02:43:28.792065 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f-cni-path\") pod \"cilium-zlmtk\" (UID: \"9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f\") " pod="kube-system/cilium-zlmtk"
Dec 13 02:43:28.797809 systemd[1]: Created slice kubepods-burstable-pod9ad4cbd1_b7ac_46b7_995e_fa0a23a2c96f.slice - libcontainer container kubepods-burstable-pod9ad4cbd1_b7ac_46b7_995e_fa0a23a2c96f.slice.
Dec 13 02:43:28.931733 sshd[4410]: pam_unix(sshd:session): session closed for user core
Dec 13 02:43:28.946439 systemd[1]: sshd@25-172.24.4.28:22-172.24.4.1:33946.service: Deactivated successfully.
Dec 13 02:43:28.949394 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 02:43:28.960073 systemd-logind[1434]: Session 28 logged out. Waiting for processes to exit.
Dec 13 02:43:28.973302 systemd[1]: Started sshd@26-172.24.4.28:22-172.24.4.1:33958.service - OpenSSH per-connection server daemon (172.24.4.1:33958).
Dec 13 02:43:28.975923 systemd-logind[1434]: Removed session 28.
Dec 13 02:43:29.109785 containerd[1452]: time="2024-12-13T02:43:29.109734863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlmtk,Uid:9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f,Namespace:kube-system,Attempt:0,}"
Dec 13 02:43:29.140413 containerd[1452]: time="2024-12-13T02:43:29.139877123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:43:29.140413 containerd[1452]: time="2024-12-13T02:43:29.140227893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:43:29.140413 containerd[1452]: time="2024-12-13T02:43:29.140262027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:43:29.141447 containerd[1452]: time="2024-12-13T02:43:29.141376835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:43:29.162889 systemd[1]: Started cri-containerd-357402b5a4a6c3efa8f2f6079663affc0e743353d1a8c2d823cf62e1456ea148.scope - libcontainer container 357402b5a4a6c3efa8f2f6079663affc0e743353d1a8c2d823cf62e1456ea148.
Dec 13 02:43:29.187713 containerd[1452]: time="2024-12-13T02:43:29.187675752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlmtk,Uid:9ad4cbd1-b7ac-46b7-995e-fa0a23a2c96f,Namespace:kube-system,Attempt:0,} returns sandbox id \"357402b5a4a6c3efa8f2f6079663affc0e743353d1a8c2d823cf62e1456ea148\""
Dec 13 02:43:29.191848 containerd[1452]: time="2024-12-13T02:43:29.191713148Z" level=info msg="CreateContainer within sandbox \"357402b5a4a6c3efa8f2f6079663affc0e743353d1a8c2d823cf62e1456ea148\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:43:29.210652 containerd[1452]: time="2024-12-13T02:43:29.210264422Z" level=info msg="CreateContainer within sandbox \"357402b5a4a6c3efa8f2f6079663affc0e743353d1a8c2d823cf62e1456ea148\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"34c59e01f34c7b4c6385bc0602de34887ba8a4b6b04c9a46c9eb71183a807d9a\""
Dec 13 02:43:29.211809 containerd[1452]: time="2024-12-13T02:43:29.210942798Z" level=info msg="StartContainer for \"34c59e01f34c7b4c6385bc0602de34887ba8a4b6b04c9a46c9eb71183a807d9a\""
Dec 13 02:43:29.250829 systemd[1]: Started cri-containerd-34c59e01f34c7b4c6385bc0602de34887ba8a4b6b04c9a46c9eb71183a807d9a.scope - libcontainer container 34c59e01f34c7b4c6385bc0602de34887ba8a4b6b04c9a46c9eb71183a807d9a.
Dec 13 02:43:29.283060 containerd[1452]: time="2024-12-13T02:43:29.283020566Z" level=info msg="StartContainer for \"34c59e01f34c7b4c6385bc0602de34887ba8a4b6b04c9a46c9eb71183a807d9a\" returns successfully"
Dec 13 02:43:29.293992 systemd[1]: cri-containerd-34c59e01f34c7b4c6385bc0602de34887ba8a4b6b04c9a46c9eb71183a807d9a.scope: Deactivated successfully.
Dec 13 02:43:29.349919 containerd[1452]: time="2024-12-13T02:43:29.349776492Z" level=info msg="shim disconnected" id=34c59e01f34c7b4c6385bc0602de34887ba8a4b6b04c9a46c9eb71183a807d9a namespace=k8s.io
Dec 13 02:43:29.349919 containerd[1452]: time="2024-12-13T02:43:29.349907990Z" level=warning msg="cleaning up after shim disconnected" id=34c59e01f34c7b4c6385bc0602de34887ba8a4b6b04c9a46c9eb71183a807d9a namespace=k8s.io
Dec 13 02:43:29.349919 containerd[1452]: time="2024-12-13T02:43:29.349922878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:43:29.607961 kubelet[2666]: I1213 02:43:29.607875 2666 setters.go:568] "Node became not ready" node="ci-4081-2-1-7-a50b4b34f3.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:43:29Z","lastTransitionTime":"2024-12-13T02:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 02:43:29.813057 containerd[1452]: time="2024-12-13T02:43:29.812915391Z" level=info msg="CreateContainer within sandbox \"357402b5a4a6c3efa8f2f6079663affc0e743353d1a8c2d823cf62e1456ea148\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:43:29.834762 containerd[1452]: time="2024-12-13T02:43:29.834657767Z" level=info msg="CreateContainer within sandbox \"357402b5a4a6c3efa8f2f6079663affc0e743353d1a8c2d823cf62e1456ea148\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"051a63d521144eccb43df827a07f96c42dd28666e2518ae913b14eafb56ae287\""
Dec 13 02:43:29.835784 containerd[1452]: time="2024-12-13T02:43:29.835707081Z" level=info msg="StartContainer for \"051a63d521144eccb43df827a07f96c42dd28666e2518ae913b14eafb56ae287\""
Dec 13 02:43:29.894797 systemd[1]: Started cri-containerd-051a63d521144eccb43df827a07f96c42dd28666e2518ae913b14eafb56ae287.scope - libcontainer container 051a63d521144eccb43df827a07f96c42dd28666e2518ae913b14eafb56ae287.
Dec 13 02:43:29.942079 containerd[1452]: time="2024-12-13T02:43:29.942004962Z" level=info msg="StartContainer for \"051a63d521144eccb43df827a07f96c42dd28666e2518ae913b14eafb56ae287\" returns successfully"
Dec 13 02:43:29.947468 systemd[1]: cri-containerd-051a63d521144eccb43df827a07f96c42dd28666e2518ae913b14eafb56ae287.scope: Deactivated successfully.
Dec 13 02:43:29.967381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-051a63d521144eccb43df827a07f96c42dd28666e2518ae913b14eafb56ae287-rootfs.mount: Deactivated successfully.
Dec 13 02:43:29.979658 containerd[1452]: time="2024-12-13T02:43:29.979427870Z" level=info msg="shim disconnected" id=051a63d521144eccb43df827a07f96c42dd28666e2518ae913b14eafb56ae287 namespace=k8s.io
Dec 13 02:43:29.979658 containerd[1452]: time="2024-12-13T02:43:29.979652182Z" level=warning msg="cleaning up after shim disconnected" id=051a63d521144eccb43df827a07f96c42dd28666e2518ae913b14eafb56ae287 namespace=k8s.io
Dec 13 02:43:29.979867 containerd[1452]: time="2024-12-13T02:43:29.979667831Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:43:30.186697 sshd[4428]: Accepted publickey for core from 172.24.4.1 port 33958 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:43:30.189245 sshd[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:43:30.202030 systemd-logind[1434]: New session 29 of user core.
Dec 13 02:43:30.209267 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 02:43:30.694108 sshd[4428]: pam_unix(sshd:session): session closed for user core
Dec 13 02:43:30.707656 systemd[1]: sshd@26-172.24.4.28:22-172.24.4.1:33958.service: Deactivated successfully.
Dec 13 02:43:30.712832 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 02:43:30.715934 systemd-logind[1434]: Session 29 logged out. Waiting for processes to exit.
Dec 13 02:43:30.727212 systemd[1]: Started sshd@27-172.24.4.28:22-172.24.4.1:33966.service - OpenSSH per-connection server daemon (172.24.4.1:33966).
Dec 13 02:43:30.731893 systemd-logind[1434]: Removed session 29.
Dec 13 02:43:30.825129 containerd[1452]: time="2024-12-13T02:43:30.824840915Z" level=info msg="CreateContainer within sandbox \"357402b5a4a6c3efa8f2f6079663affc0e743353d1a8c2d823cf62e1456ea148\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:43:30.870414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4143454126.mount: Deactivated successfully.
Dec 13 02:43:30.877108 containerd[1452]: time="2024-12-13T02:43:30.876831315Z" level=info msg="CreateContainer within sandbox \"357402b5a4a6c3efa8f2f6079663affc0e743353d1a8c2d823cf62e1456ea148\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e496ff67d56f041cb6e9f288afcfc6bb8e56db0cd5b66d7313166f40c8e4e7af\""
Dec 13 02:43:30.880548 containerd[1452]: time="2024-12-13T02:43:30.879601397Z" level=info msg="StartContainer for \"e496ff67d56f041cb6e9f288afcfc6bb8e56db0cd5b66d7313166f40c8e4e7af\""
Dec 13 02:43:30.930763 systemd[1]: Started cri-containerd-e496ff67d56f041cb6e9f288afcfc6bb8e56db0cd5b66d7313166f40c8e4e7af.scope - libcontainer container e496ff67d56f041cb6e9f288afcfc6bb8e56db0cd5b66d7313166f40c8e4e7af.
Dec 13 02:43:31.030095 containerd[1452]: time="2024-12-13T02:43:31.029957865Z" level=info msg="StartContainer for \"e496ff67d56f041cb6e9f288afcfc6bb8e56db0cd5b66d7313166f40c8e4e7af\" returns successfully"
Dec 13 02:43:31.038489 systemd[1]: cri-containerd-e496ff67d56f041cb6e9f288afcfc6bb8e56db0cd5b66d7313166f40c8e4e7af.scope: Deactivated successfully.
Dec 13 02:43:31.068314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e496ff67d56f041cb6e9f288afcfc6bb8e56db0cd5b66d7313166f40c8e4e7af-rootfs.mount: Deactivated successfully.
Dec 13 02:43:31.079990 containerd[1452]: time="2024-12-13T02:43:31.079772188Z" level=info msg="shim disconnected" id=e496ff67d56f041cb6e9f288afcfc6bb8e56db0cd5b66d7313166f40c8e4e7af namespace=k8s.io
Dec 13 02:43:31.079990 containerd[1452]: time="2024-12-13T02:43:31.079857629Z" level=warning msg="cleaning up after shim disconnected" id=e496ff67d56f041cb6e9f288afcfc6bb8e56db0cd5b66d7313166f40c8e4e7af namespace=k8s.io
Dec 13 02:43:31.079990 containerd[1452]: time="2024-12-13T02:43:31.079911630Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:43:31.096116 kubelet[2666]: E1213 02:43:31.096026 2666 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:43:31.832057 containerd[1452]: time="2024-12-13T02:43:31.831959608Z" level=info msg="CreateContainer within sandbox \"357402b5a4a6c3efa8f2f6079663affc0e743353d1a8c2d823cf62e1456ea148\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:43:31.867500 containerd[1452]: time="2024-12-13T02:43:31.867367017Z" level=info msg="CreateContainer within sandbox \"357402b5a4a6c3efa8f2f6079663affc0e743353d1a8c2d823cf62e1456ea148\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1bc3f13eb6371c0037f3ed100da98fe6b702dbee2edabd795ce51e9a99539f3d\""
Dec 13 02:43:31.870233 containerd[1452]: time="2024-12-13T02:43:31.870138481Z" level=info msg="StartContainer for \"1bc3f13eb6371c0037f3ed100da98fe6b702dbee2edabd795ce51e9a99539f3d\""
Dec 13 02:43:31.892796 sshd[4598]: Accepted publickey for core from 172.24.4.1 port 33966 ssh2: RSA SHA256:s+jMJkc8yzesvkj+g1MqwY5XQAL52YjwOYy7JiKKino
Dec 13 02:43:31.897970 sshd[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 02:43:31.915756 systemd-logind[1434]: New session 30 of user core.
Dec 13 02:43:31.922554 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 13 02:43:31.934930 systemd[1]: Started cri-containerd-1bc3f13eb6371c0037f3ed100da98fe6b702dbee2edabd795ce51e9a99539f3d.scope - libcontainer container 1bc3f13eb6371c0037f3ed100da98fe6b702dbee2edabd795ce51e9a99539f3d.
Dec 13 02:43:31.969417 systemd[1]: cri-containerd-1bc3f13eb6371c0037f3ed100da98fe6b702dbee2edabd795ce51e9a99539f3d.scope: Deactivated successfully.
Dec 13 02:43:31.975376 containerd[1452]: time="2024-12-13T02:43:31.975268467Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ad4cbd1_b7ac_46b7_995e_fa0a23a2c96f.slice/cri-containerd-1bc3f13eb6371c0037f3ed100da98fe6b702dbee2edabd795ce51e9a99539f3d.scope/memory.events\": no such file or directory"
Dec 13 02:43:31.978473 containerd[1452]: time="2024-12-13T02:43:31.978430916Z" level=info msg="StartContainer for \"1bc3f13eb6371c0037f3ed100da98fe6b702dbee2edabd795ce51e9a99539f3d\" returns successfully"
Dec 13 02:43:31.999182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bc3f13eb6371c0037f3ed100da98fe6b702dbee2edabd795ce51e9a99539f3d-rootfs.mount: Deactivated successfully.
Dec 13 02:43:32.008101 containerd[1452]: time="2024-12-13T02:43:32.007897459Z" level=info msg="shim disconnected" id=1bc3f13eb6371c0037f3ed100da98fe6b702dbee2edabd795ce51e9a99539f3d namespace=k8s.io
Dec 13 02:43:32.008101 containerd[1452]: time="2024-12-13T02:43:32.007960197Z" level=warning msg="cleaning up after shim disconnected" id=1bc3f13eb6371c0037f3ed100da98fe6b702dbee2edabd795ce51e9a99539f3d namespace=k8s.io
Dec 13 02:43:32.008101 containerd[1452]: time="2024-12-13T02:43:32.007970606Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 02:43:32.839161 containerd[1452]: time="2024-12-13T02:43:32.839036324Z" level=info msg="CreateContainer within sandbox \"357402b5a4a6c3efa8f2f6079663affc0e743353d1a8c2d823cf62e1456ea148\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:43:32.898341 containerd[1452]: time="2024-12-13T02:43:32.897968556Z" level=info msg="CreateContainer within sandbox \"357402b5a4a6c3efa8f2f6079663affc0e743353d1a8c2d823cf62e1456ea148\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cac6b0ddf17f18e59adb858c0038f5307cf5197fc2254d3fec81ab3e48d76039\""
Dec 13 02:43:32.901780 containerd[1452]: time="2024-12-13T02:43:32.901693413Z" level=info msg="StartContainer for \"cac6b0ddf17f18e59adb858c0038f5307cf5197fc2254d3fec81ab3e48d76039\""
Dec 13 02:43:32.954080 systemd[1]: run-containerd-runc-k8s.io-cac6b0ddf17f18e59adb858c0038f5307cf5197fc2254d3fec81ab3e48d76039-runc.9oXP3i.mount: Deactivated successfully.
Dec 13 02:43:32.963812 systemd[1]: Started cri-containerd-cac6b0ddf17f18e59adb858c0038f5307cf5197fc2254d3fec81ab3e48d76039.scope - libcontainer container cac6b0ddf17f18e59adb858c0038f5307cf5197fc2254d3fec81ab3e48d76039.
Dec 13 02:43:33.011024 containerd[1452]: time="2024-12-13T02:43:33.010962615Z" level=info msg="StartContainer for \"cac6b0ddf17f18e59adb858c0038f5307cf5197fc2254d3fec81ab3e48d76039\" returns successfully"
Dec 13 02:43:33.707038 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 02:43:33.764660 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Dec 13 02:43:33.802655 kernel: jitterentropy: Initialization failed with host not compliant with requirements: 9
Dec 13 02:43:33.819644 kernel: DRBG: Continuing without Jitter RNG
Dec 13 02:43:34.984634 systemd[1]: run-containerd-runc-k8s.io-cac6b0ddf17f18e59adb858c0038f5307cf5197fc2254d3fec81ab3e48d76039-runc.pDJiGC.mount: Deactivated successfully.
Dec 13 02:43:37.114712 systemd-networkd[1351]: lxc_health: Link UP
Dec 13 02:43:37.121506 systemd-networkd[1351]: lxc_health: Gained carrier
Dec 13 02:43:37.269932 kubelet[2666]: I1213 02:43:37.269886 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zlmtk" podStartSLOduration=9.269819864 podStartE2EDuration="9.269819864s" podCreationTimestamp="2024-12-13 02:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:43:33.870017975 +0000 UTC m=+168.091312211" watchObservedRunningTime="2024-12-13 02:43:37.269819864 +0000 UTC m=+171.491114100"
Dec 13 02:43:37.387229 systemd[1]: run-containerd-runc-k8s.io-cac6b0ddf17f18e59adb858c0038f5307cf5197fc2254d3fec81ab3e48d76039-runc.Lx4fTg.mount: Deactivated successfully.
Dec 13 02:43:37.548715 kubelet[2666]: E1213 02:43:37.548657 2666 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39290->127.0.0.1:40619: write tcp 127.0.0.1:39290->127.0.0.1:40619: write: connection reset by peer
Dec 13 02:43:38.446805 systemd-networkd[1351]: lxc_health: Gained IPv6LL
Dec 13 02:43:39.895959 systemd[1]: run-containerd-runc-k8s.io-cac6b0ddf17f18e59adb858c0038f5307cf5197fc2254d3fec81ab3e48d76039-runc.gYN6go.mount: Deactivated successfully.
Dec 13 02:43:44.706334 sshd[4598]: pam_unix(sshd:session): session closed for user core
Dec 13 02:43:44.712919 systemd-logind[1434]: Session 30 logged out. Waiting for processes to exit.
Dec 13 02:43:44.714452 systemd[1]: sshd@27-172.24.4.28:22-172.24.4.1:33966.service: Deactivated successfully.
Dec 13 02:43:44.720397 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 02:43:44.725878 systemd-logind[1434]: Removed session 30.
Dec 13 02:43:46.049498 containerd[1452]: time="2024-12-13T02:43:46.049066955Z" level=info msg="StopPodSandbox for \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\""
Dec 13 02:43:46.051494 containerd[1452]: time="2024-12-13T02:43:46.049689766Z" level=info msg="TearDown network for sandbox \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\" successfully"
Dec 13 02:43:46.051494 containerd[1452]: time="2024-12-13T02:43:46.049733719Z" level=info msg="StopPodSandbox for \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\" returns successfully"
Dec 13 02:43:46.051494 containerd[1452]: time="2024-12-13T02:43:46.050443183Z" level=info msg="RemovePodSandbox for \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\""
Dec 13 02:43:46.056971 containerd[1452]: time="2024-12-13T02:43:46.056846712Z" level=info msg="Forcibly stopping sandbox \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\""
Dec 13 02:43:46.057154 containerd[1452]: time="2024-12-13T02:43:46.057061406Z" level=info msg="TearDown network for sandbox \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\" successfully"
Dec 13 02:43:46.064791 containerd[1452]: time="2024-12-13T02:43:46.064699816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 02:43:46.064964 containerd[1452]: time="2024-12-13T02:43:46.064842355Z" level=info msg="RemovePodSandbox \"01770fbcae4d31519dd8191018bfcaa3773aa8338d2315f438770210612f3a0f\" returns successfully"
Dec 13 02:43:46.065681 containerd[1452]: time="2024-12-13T02:43:46.065639514Z" level=info msg="StopPodSandbox for \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\""
Dec 13 02:43:46.066378 containerd[1452]: time="2024-12-13T02:43:46.066016241Z" level=info msg="TearDown network for sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" successfully"
Dec 13 02:43:46.066378 containerd[1452]: time="2024-12-13T02:43:46.066050647Z" level=info msg="StopPodSandbox for \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" returns successfully"
Dec 13 02:43:46.066697 containerd[1452]: time="2024-12-13T02:43:46.066600260Z" level=info msg="RemovePodSandbox for \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\""
Dec 13 02:43:46.066780 containerd[1452]: time="2024-12-13T02:43:46.066710017Z" level=info msg="Forcibly stopping sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\""
Dec 13 02:43:46.066900 containerd[1452]: time="2024-12-13T02:43:46.066843817Z" level=info msg="TearDown network for sandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" successfully"
Dec 13 02:43:46.072836 containerd[1452]: time="2024-12-13T02:43:46.072716809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 02:43:46.073225 containerd[1452]: time="2024-12-13T02:43:46.072834271Z" level=info msg="RemovePodSandbox \"4b2fc516861345d098e2893b77f881981d963c7d6a6daee2faf9dcad7c30c291\" returns successfully"