Feb 13 15:38:49.128495 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:00:20 -00 2025 Feb 13 15:38:49.132597 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65 Feb 13 15:38:49.132622 kernel: BIOS-provided physical RAM map: Feb 13 15:38:49.132637 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Feb 13 15:38:49.132651 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Feb 13 15:38:49.132665 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Feb 13 15:38:49.132692 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Feb 13 15:38:49.132708 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Feb 13 15:38:49.132732 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd328fff] usable Feb 13 15:38:49.132748 kernel: BIOS-e820: [mem 0x00000000bd329000-0x00000000bd330fff] ACPI data Feb 13 15:38:49.132764 kernel: BIOS-e820: [mem 0x00000000bd331000-0x00000000bf8ecfff] usable Feb 13 15:38:49.132779 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Feb 13 15:38:49.132794 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Feb 13 15:38:49.132810 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Feb 13 15:38:49.132835 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Feb 13 15:38:49.132853 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Feb 13 15:38:49.132870 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Feb 13 15:38:49.132887 kernel: NX (Execute Disable) protection: active Feb 13 15:38:49.132904 kernel: APIC: Static calls initialized Feb 13 15:38:49.132921 kernel: efi: EFI v2.7 by EDK II Feb 13 15:38:49.132937 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd329018 Feb 13 15:38:49.132952 kernel: random: crng init done Feb 13 15:38:49.132969 kernel: secureboot: Secure boot disabled Feb 13 15:38:49.132987 kernel: SMBIOS 2.4 present. 
Feb 13 15:38:49.133009 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024 Feb 13 15:38:49.133026 kernel: Hypervisor detected: KVM Feb 13 15:38:49.133042 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 15:38:49.133061 kernel: kvm-clock: using sched offset of 13500189368 cycles Feb 13 15:38:49.133079 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 15:38:49.133097 kernel: tsc: Detected 2299.998 MHz processor Feb 13 15:38:49.133115 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 15:38:49.133134 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 15:38:49.133151 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Feb 13 15:38:49.133169 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Feb 13 15:38:49.133192 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 15:38:49.133210 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Feb 13 15:38:49.133227 kernel: Using GB pages for direct mapping Feb 13 15:38:49.133245 kernel: ACPI: Early table checksum verification disabled Feb 13 15:38:49.133263 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Feb 13 15:38:49.133282 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Feb 13 15:38:49.133307 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Feb 13 15:38:49.133330 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Feb 13 15:38:49.133349 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Feb 13 15:38:49.133368 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Feb 13 15:38:49.133387 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Feb 13 15:38:49.133406 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Feb 13 15:38:49.133425 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Feb 13 15:38:49.133444 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Feb 13 15:38:49.133466 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Feb 13 15:38:49.133485 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Feb 13 15:38:49.133504 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Feb 13 15:38:49.133523 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Feb 13 15:38:49.133634 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Feb 13 15:38:49.133654 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Feb 13 15:38:49.133672 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Feb 13 15:38:49.133698 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Feb 13 15:38:49.133718 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Feb 13 15:38:49.133741 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Feb 13 15:38:49.133760 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 15:38:49.133779 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 15:38:49.133798 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 13 15:38:49.133816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Feb 13 15:38:49.133835 kernel: ACPI: 
SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Feb 13 15:38:49.133854 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Feb 13 15:38:49.133873 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Feb 13 15:38:49.133892 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Feb 13 15:38:49.133915 kernel: Zone ranges: Feb 13 15:38:49.133933 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:38:49.133952 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 15:38:49.133971 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Feb 13 15:38:49.133990 kernel: Movable zone start for each node Feb 13 15:38:49.134008 kernel: Early memory node ranges Feb 13 15:38:49.134027 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Feb 13 15:38:49.134046 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Feb 13 15:38:49.134065 kernel: node 0: [mem 0x0000000000100000-0x00000000bd328fff] Feb 13 15:38:49.134088 kernel: node 0: [mem 0x00000000bd331000-0x00000000bf8ecfff] Feb 13 15:38:49.134107 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Feb 13 15:38:49.134126 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Feb 13 15:38:49.134145 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Feb 13 15:38:49.134164 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:38:49.134183 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Feb 13 15:38:49.134202 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Feb 13 15:38:49.134221 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Feb 13 15:38:49.134240 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 13 15:38:49.134263 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Feb 13 15:38:49.134282 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 13 15:38:49.134301 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 15:38:49.134319 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:38:49.134338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 15:38:49.134358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:38:49.134377 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 15:38:49.134394 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 15:38:49.134412 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:38:49.134435 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 15:38:49.134451 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Feb 13 15:38:49.134466 kernel: Booting paravirtualized kernel on KVM Feb 13 15:38:49.134483 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:38:49.134501 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 15:38:49.134519 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 15:38:49.136596 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 15:38:49.136629 kernel: pcpu-alloc: [0] 0 1 Feb 13 15:38:49.136647 kernel: kvm-guest: PV spinlocks enabled Feb 13 15:38:49.136673 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:38:49.136704 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65 Feb 13 15:38:49.136723 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:38:49.136740 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 15:38:49.136759 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:38:49.136777 kernel: Fallback order for Node 0: 0 Feb 13 15:38:49.136794 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272 Feb 13 15:38:49.136812 kernel: Policy zone: Normal Feb 13 15:38:49.136835 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:38:49.136852 kernel: software IO TLB: area num 2. Feb 13 15:38:49.136871 kernel: Memory: 7511320K/7860552K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 348976K reserved, 0K cma-reserved) Feb 13 15:38:49.136887 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:38:49.136906 kernel: Kernel/User page tables isolation: enabled Feb 13 15:38:49.136923 kernel: ftrace: allocating 37893 entries in 149 pages Feb 13 15:38:49.136940 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:38:49.136958 kernel: Dynamic Preempt: voluntary Feb 13 15:38:49.136995 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:38:49.137015 kernel: rcu: RCU event tracing is enabled. Feb 13 15:38:49.137034 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:38:49.137053 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:38:49.137076 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:38:49.137096 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:38:49.137114 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 15:38:49.137133 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:38:49.137153 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 15:38:49.137177 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:38:49.137195 kernel: Console: colour dummy device 80x25 Feb 13 15:38:49.137214 kernel: printk: console [ttyS0] enabled Feb 13 15:38:49.137232 kernel: ACPI: Core revision 20230628 Feb 13 15:38:49.137251 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:38:49.137270 kernel: x2apic enabled Feb 13 15:38:49.137289 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 15:38:49.137307 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Feb 13 15:38:49.137325 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 13 15:38:49.137349 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Feb 13 15:38:49.137367 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Feb 13 15:38:49.137386 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Feb 13 15:38:49.137411 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:38:49.137428 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 15:38:49.137445 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 15:38:49.137463 kernel: Spectre V2 : Mitigation: IBRS Feb 13 15:38:49.137481 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:38:49.137498 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:38:49.137520 kernel: RETBleed: Mitigation: IBRS Feb 13 15:38:49.137558 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 15:38:49.137576 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Feb 13 15:38:49.137592 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 15:38:49.137608 kernel: MDS: Mitigation: Clear CPU buffers Feb 13 15:38:49.137626 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 15:38:49.137643 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:38:49.137661 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:38:49.137687 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:38:49.137710 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:38:49.137728 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 13 15:38:49.137746 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:38:49.137763 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:38:49.137781 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:38:49.137799 kernel: landlock: Up and running. Feb 13 15:38:49.137818 kernel: SELinux: Initializing. Feb 13 15:38:49.137838 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:38:49.137858 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:38:49.137882 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Feb 13 15:38:49.137902 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:38:49.137921 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:38:49.137940 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:38:49.137959 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Feb 13 15:38:49.137979 kernel: signal: max sigframe size: 1776 Feb 13 15:38:49.137998 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:38:49.138018 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:38:49.138041 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 15:38:49.138060 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:38:49.138079 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:38:49.138098 kernel: .... node #0, CPUs: #1 Feb 13 15:38:49.138120 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 13 15:38:49.138140 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 15:38:49.138159 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:38:49.138178 kernel: smpboot: Max logical packages: 1 Feb 13 15:38:49.138198 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Feb 13 15:38:49.138221 kernel: devtmpfs: initialized Feb 13 15:38:49.138239 kernel: x86/mm: Memory block size: 128MB Feb 13 15:38:49.138257 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Feb 13 15:38:49.138274 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:38:49.138293 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:38:49.138311 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:38:49.138328 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:38:49.138346 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:38:49.138363 kernel: audit: type=2000 audit(1739461128.106:1): state=initialized audit_enabled=0 res=1 Feb 13 15:38:49.138384 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:38:49.138402 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 15:38:49.138419 kernel: cpuidle: using governor menu Feb 13 15:38:49.138437 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:38:49.138455 kernel: dca service started, version 1.12.1 Feb 13 15:38:49.138473 kernel: PCI: Using configuration type 1 for base access Feb 13 15:38:49.138491 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 15:38:49.138510 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:38:49.140576 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:38:49.140616 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:38:49.140636 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:38:49.140655 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:38:49.140681 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:38:49.140700 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:38:49.140719 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:38:49.140737 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 13 15:38:49.140756 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:38:49.140775 kernel: ACPI: Interpreter enabled Feb 13 15:38:49.140798 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 15:38:49.140816 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:38:49.140836 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:38:49.140855 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 15:38:49.140874 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 13 15:38:49.140894 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:38:49.141192 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:38:49.141403 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 15:38:49.141628 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 15:38:49.141650 kernel: PCI host bridge to bus 0000:00 Feb 13 15:38:49.141856 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 15:38:49.142033 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 15:38:49.142206 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 15:38:49.142379 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Feb 13 15:38:49.142595 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:38:49.142813 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 15:38:49.143015 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Feb 13 15:38:49.143211 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 13 15:38:49.143400 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 13 15:38:49.143631 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Feb 13 15:38:49.143834 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Feb 13 15:38:49.144031 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Feb 13 15:38:49.144234 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 15:38:49.144425 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Feb 13 15:38:49.144709 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Feb 13 15:38:49.144914 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:38:49.145100 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 13 15:38:49.145292 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Feb 13 15:38:49.145315 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 15:38:49.145334 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 15:38:49.145353 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 15:38:49.145371 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 15:38:49.145389 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 15:38:49.145408 kernel: iommu: Default domain type: Translated Feb 13 15:38:49.145426 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 15:38:49.145444 kernel: efivars: Registered efivars operations Feb 13 15:38:49.145467 kernel: PCI: Using ACPI for IRQ routing Feb 13 15:38:49.145485 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 15:38:49.145504 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Feb 13 15:38:49.145522 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Feb 13 15:38:49.145556 kernel: e820: reserve RAM buffer [mem 0xbd329000-0xbfffffff] Feb 13 15:38:49.145574 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Feb 13 15:38:49.145591 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Feb 13 15:38:49.145608 kernel: vgaarb: loaded Feb 13 15:38:49.145625 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 15:38:49.145644 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:38:49.145667 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:38:49.145694 kernel: pnp: PnP ACPI init Feb 13 15:38:49.145712 kernel: pnp: PnP ACPI: found 7 devices Feb 13 15:38:49.145731 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 15:38:49.145751 kernel: NET: Registered PF_INET protocol family Feb 13 15:38:49.145769 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 15:38:49.145788 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 15:38:49.145806 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:38:49.145825 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:38:49.145848 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 15:38:49.145866 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 15:38:49.145884 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 15:38:49.145903 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 15:38:49.145921 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:38:49.145939 kernel: NET: Registered PF_XDP protocol family Feb 13 15:38:49.146111 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 15:38:49.146275 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 15:38:49.146441 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 15:38:49.146615 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Feb 13 15:38:49.146808 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 15:38:49.146832 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:38:49.146849 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 15:38:49.146867 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Feb 13 15:38:49.146884 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 15:38:49.146910 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 13 15:38:49.146928 kernel: clocksource: Switched to clocksource tsc Feb 
13 15:38:49.146948 kernel: Initialise system trusted keyrings Feb 13 15:38:49.146966 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 15:38:49.146984 kernel: Key type asymmetric registered Feb 13 15:38:49.147002 kernel: Asymmetric key parser 'x509' registered Feb 13 15:38:49.147022 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 15:38:49.147039 kernel: io scheduler mq-deadline registered Feb 13 15:38:49.147057 kernel: io scheduler kyber registered Feb 13 15:38:49.147075 kernel: io scheduler bfq registered Feb 13 15:38:49.147098 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 15:38:49.147117 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 13 15:38:49.147326 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Feb 13 15:38:49.147352 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 13 15:38:49.147571 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Feb 13 15:38:49.147597 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 13 15:38:49.147797 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Feb 13 15:38:49.147822 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:38:49.147847 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 15:38:49.147867 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 15:38:49.147887 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Feb 13 15:38:49.147906 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Feb 13 15:38:49.148100 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Feb 13 15:38:49.148127 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 15:38:49.148145 kernel: i8042: Warning: Keylock active Feb 13 15:38:49.148164 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 15:38:49.148188 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 15:38:49.148377 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 13 15:38:49.150606 kernel: rtc_cmos 00:00: registered as rtc0 Feb 13 15:38:49.150829 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:38:48 UTC (1739461128) Feb 13 15:38:49.151008 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 13 15:38:49.151033 kernel: intel_pstate: CPU model not supported Feb 13 15:38:49.151054 kernel: pstore: Using crash dump compression: deflate Feb 13 15:38:49.151072 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 15:38:49.151098 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:38:49.151115 kernel: Segment Routing with IPv6 Feb 13 15:38:49.151133 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:38:49.151149 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:38:49.151163 kernel: Key type dns_resolver registered Feb 13 15:38:49.151179 kernel: IPI shorthand broadcast: enabled Feb 13 15:38:49.151195 kernel: sched_clock: Marking stable (886004306, 160904595)->(1088021563, -41112662) Feb 13 15:38:49.151212 kernel: registered taskstats version 1 Feb 13 15:38:49.151229 kernel: Loading compiled-in X.509 certificates Feb 13 15:38:49.151253 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: a260c8876205efb4ca2ab3eb040cd310ec7afd21' Feb 13 15:38:49.151272 kernel: Key type .fscrypt registered Feb 13 15:38:49.151288 kernel: Key type fscrypt-provisioning registered Feb 13 15:38:49.151306 kernel: ima: Allocated hash algorithm: 
sha1 Feb 13 15:38:49.151324 kernel: ima: No architecture policies found Feb 13 15:38:49.151341 kernel: clk: Disabling unused clocks Feb 13 15:38:49.151360 kernel: Freeing unused kernel image (initmem) memory: 43476K Feb 13 15:38:49.151380 kernel: Write protecting the kernel read-only data: 38912k Feb 13 15:38:49.151399 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Feb 13 15:38:49.151421 kernel: Run /init as init process Feb 13 15:38:49.151437 kernel: with arguments: Feb 13 15:38:49.151455 kernel: /init Feb 13 15:38:49.151471 kernel: with environment: Feb 13 15:38:49.151489 kernel: HOME=/ Feb 13 15:38:49.151506 kernel: TERM=linux Feb 13 15:38:49.151522 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:38:49.151556 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 15:38:49.151575 systemd[1]: Successfully made /usr/ read-only. Feb 13 15:38:49.151605 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:38:49.151626 systemd[1]: Detected virtualization google. Feb 13 15:38:49.151645 systemd[1]: Detected architecture x86-64. Feb 13 15:38:49.151663 systemd[1]: Running in initrd. Feb 13 15:38:49.151695 systemd[1]: No hostname configured, using default hostname. Feb 13 15:38:49.151715 systemd[1]: Hostname set to . Feb 13 15:38:49.151734 systemd[1]: Initializing machine ID from random generator. Feb 13 15:38:49.151757 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:38:49.151776 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:38:49.151795 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:38:49.151817 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:38:49.151836 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:38:49.151855 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:38:49.151881 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:38:49.151918 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:38:49.151943 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:38:49.151963 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:38:49.151983 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:38:49.152003 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:38:49.152027 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:38:49.152046 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:38:49.152066 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:38:49.152087 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:38:49.152106 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Feb 13 15:38:49.152127 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:38:49.152147 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Feb 13 15:38:49.152167 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:38:49.152187 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:38:49.152211 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:38:49.152231 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:38:49.152251 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:38:49.152271 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:38:49.152291 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:38:49.152311 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:38:49.152331 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:38:49.152351 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:38:49.152371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:38:49.152400 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:38:49.152466 systemd-journald[184]: Collecting audit messages is disabled. Feb 13 15:38:49.152511 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:38:49.154610 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:38:49.154663 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:38:49.154695 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:38:49.154719 systemd-journald[184]: Journal started Feb 13 15:38:49.154769 systemd-journald[184]: Runtime Journal (/run/log/journal/eb0b26955c194524bc7c552b26fec09f) is 8M, max 148.6M, 140.6M free. Feb 13 15:38:49.116439 systemd-modules-load[185]: Inserted module 'overlay' Feb 13 15:38:49.167689 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:38:49.164409 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:38:49.177568 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:38:49.180257 systemd-modules-load[185]: Inserted module 'br_netfilter' Feb 13 15:38:49.183686 kernel: Bridge firewalling registered Feb 13 15:38:49.182627 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:38:49.192758 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:38:49.201757 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:38:49.205783 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:38:49.209779 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:38:49.221932 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:38:49.240047 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:38:49.245431 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Feb 13 15:38:49.253007 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:38:49.263789 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:38:49.269596 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:38:49.299355 dracut-cmdline[217]: dracut-dracut-053 Feb 13 15:38:49.303880 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65 Feb 13 15:38:49.333415 systemd-resolved[218]: Positive Trust Anchors: Feb 13 15:38:49.333439 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:38:49.333513 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:38:49.339260 systemd-resolved[218]: Defaulting to hostname 'linux'. Feb 13 15:38:49.341289 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:38:49.354957 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:38:49.412579 kernel: SCSI subsystem initialized Feb 13 15:38:49.423586 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:38:49.435586 kernel: iscsi: registered transport (tcp) Feb 13 15:38:49.459582 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:38:49.459671 kernel: QLogic iSCSI HBA Driver Feb 13 15:38:49.511261 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:38:49.517744 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:38:49.592355 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:38:49.592454 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:38:49.592483 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:38:49.649612 kernel: raid6: avx2x4 gen() 17737 MB/s Feb 13 15:38:49.670610 kernel: raid6: avx2x2 gen() 18076 MB/s Feb 13 15:38:49.696762 kernel: raid6: avx2x1 gen() 13932 MB/s Feb 13 15:38:49.696865 kernel: raid6: using algorithm avx2x2 gen() 18076 MB/s Feb 13 15:38:49.723783 kernel: raid6: .... xor() 18823 MB/s, rmw enabled Feb 13 15:38:49.723890 kernel: raid6: using avx2x2 recovery algorithm Feb 13 15:38:49.753584 kernel: xor: automatically using best checksumming function avx Feb 13 15:38:49.923609 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:38:49.936704 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:38:49.941768 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 15:38:49.980781 systemd-udevd[401]: Using default interface naming scheme 'v255'. Feb 13 15:38:49.989646 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:38:50.018768 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:38:50.059948 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Feb 13 15:38:50.096547 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:38:50.120774 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:38:50.229061 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:38:50.271325 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:38:50.325783 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:38:50.352687 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:38:50.343434 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:38:50.388579 kernel: scsi host0: Virtio SCSI HBA Feb 13 15:38:50.388951 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Feb 13 15:38:50.396626 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:38:50.465320 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 15:38:50.465493 kernel: AES CTR mode by8 optimization enabled Feb 13 15:38:50.468696 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:38:50.484938 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:38:50.530953 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Feb 13 15:38:50.592413 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Feb 13 15:38:50.592783 kernel: sd 0:0:1:0: [sda] Write Protect is off Feb 13 15:38:50.593017 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Feb 13 15:38:50.593250 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 15:38:50.593478 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:38:50.593504 kernel: GPT:17805311 != 25165823 Feb 13 15:38:50.593554 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:38:50.593578 kernel: GPT:17805311 != 25165823 Feb 13 15:38:50.593600 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:38:50.593623 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:38:50.593649 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Feb 13 15:38:50.530363 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:38:50.530579 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:38:50.594204 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:38:50.615907 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:38:50.677856 kernel: BTRFS: device fsid 506754f7-5ef1-4c63-ad2a-b7b855a48f85 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (457) Feb 13 15:38:50.677902 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (453) Feb 13 15:38:50.616181 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:38:50.659192 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 15:38:50.692170 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:38:50.708456 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:38:50.709383 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:38:50.768263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:38:50.793612 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Feb 13 15:38:50.825005 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Feb 13 15:38:50.835391 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Feb 13 15:38:50.855706 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Feb 13 15:38:50.889247 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 15:38:50.908758 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:38:50.919717 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:38:50.955003 disk-uuid[541]: Primary Header is updated. Feb 13 15:38:50.955003 disk-uuid[541]: Secondary Entries is updated. Feb 13 15:38:50.955003 disk-uuid[541]: Secondary Header is updated. Feb 13 15:38:50.978888 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:38:50.994241 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:38:51.021710 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:38:52.015148 disk-uuid[542]: The operation has completed successfully. Feb 13 15:38:52.024687 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:38:52.106350 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:38:52.106517 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:38:52.174816 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:38:52.201647 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 15:38:52.201846 sh[565]: Success Feb 13 15:38:52.297197 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:38:52.304165 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:38:52.329148 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:38:52.379004 kernel: BTRFS info (device dm-0): first mount of filesystem 506754f7-5ef1-4c63-ad2a-b7b855a48f85 Feb 13 15:38:52.379101 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:38:52.379133 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:38:52.388447 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:38:52.395419 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:38:52.426591 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 15:38:52.431235 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:38:52.432196 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:38:52.437767 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Feb 13 15:38:52.493906 kernel: BTRFS info (device sda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:38:52.493991 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:38:52.494020 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:38:52.512402 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:38:52.512493 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:38:52.513924 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:38:52.548717 kernel: BTRFS info (device sda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:38:52.530594 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:38:52.563023 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:38:52.585862 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:38:52.648937 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:38:52.668819 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:38:52.756380 systemd-networkd[750]: lo: Link UP Feb 13 15:38:52.756972 systemd-networkd[750]: lo: Gained carrier Feb 13 15:38:52.760030 systemd-networkd[750]: Enumeration completed Feb 13 15:38:52.760187 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:38:52.788110 ignition[692]: Ignition 2.20.0 Feb 13 15:38:52.760959 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:38:52.788121 ignition[692]: Stage: fetch-offline Feb 13 15:38:52.760966 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:38:52.788186 ignition[692]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:52.764205 systemd-networkd[750]: eth0: Link UP Feb 13 15:38:52.788204 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 15:38:52.764212 systemd-networkd[750]: eth0: Gained carrier Feb 13 15:38:52.788389 ignition[692]: parsed url from cmdline: "" Feb 13 15:38:52.764226 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:38:52.788396 ignition[692]: no config URL provided Feb 13 15:38:52.783007 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.26/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 15:38:52.788411 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:38:52.787789 systemd[1]: Reached target network.target - Network. Feb 13 15:38:52.788426 ignition[692]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:38:52.797056 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:38:52.788443 ignition[692]: failed to fetch config: resource requires networking Feb 13 15:38:52.823789 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 15:38:52.788788 ignition[692]: Ignition finished successfully Feb 13 15:38:52.855056 unknown[760]: fetched base config from "system" Feb 13 15:38:52.842715 ignition[760]: Ignition 2.20.0 Feb 13 15:38:52.855070 unknown[760]: fetched base config from "system" Feb 13 15:38:52.842726 ignition[760]: Stage: fetch Feb 13 15:38:52.855079 unknown[760]: fetched user config from "gcp" Feb 13 15:38:52.842990 ignition[760]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:52.857864 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:38:52.843003 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 15:38:52.875864 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:38:52.843127 ignition[760]: parsed url from cmdline: "" Feb 13 15:38:52.920304 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:38:52.843134 ignition[760]: no config URL provided Feb 13 15:38:52.952282 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:38:52.843144 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:38:52.985245 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:38:52.843159 ignition[760]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:38:52.993759 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:38:52.843195 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Feb 13 15:38:53.019887 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:38:52.848378 ignition[760]: GET result: OK Feb 13 15:38:53.040915 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:38:52.848466 ignition[760]: parsing config with SHA512: de71f0959b7c686c79201afc3f2214bde91127c935026e1df4d32ff8186891bb4181e42c51248c0b081f1e12b25bf219cbbcafd7a98c1a25267e9ab4c6b4f2ae Feb 13 15:38:53.060896 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:38:52.855680 ignition[760]: fetch: fetch complete Feb 13 15:38:53.080870 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:38:52.855687 ignition[760]: fetch: fetch passed Feb 13 15:38:53.103184 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:38:52.855743 ignition[760]: Ignition finished successfully Feb 13 15:38:52.917471 ignition[765]: Ignition 2.20.0 Feb 13 15:38:52.917480 ignition[765]: Stage: kargs Feb 13 15:38:52.917736 ignition[765]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:52.917749 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 15:38:52.919011 ignition[765]: kargs: kargs passed Feb 13 15:38:52.919070 ignition[765]: Ignition finished successfully Feb 13 15:38:52.972975 ignition[771]: Ignition 2.20.0 Feb 13 15:38:52.972986 ignition[771]: Stage: disks Feb 13 15:38:52.973213 ignition[771]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:52.973301 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 15:38:52.974379 ignition[771]: disks: disks passed Feb 13 15:38:52.974459 ignition[771]: Ignition finished successfully Feb 13 15:38:53.142308 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 15:38:53.352580 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:38:53.356812 systemd[1]: Mounting sysroot.mount - /sysroot... 
Feb 13 15:38:53.514572 kernel: EXT4-fs (sda9): mounted filesystem 8023eced-1511-4e72-a58a-db1b8cb3210e r/w with ordered data mode. Quota mode: none. Feb 13 15:38:53.516158 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:38:53.517178 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:38:53.546724 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:38:53.565686 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:38:53.586331 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:38:53.641885 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (788) Feb 13 15:38:53.641935 kernel: BTRFS info (device sda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:38:53.641952 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:38:53.641967 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:38:53.586500 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:38:53.680819 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:38:53.680877 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:38:53.586574 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:38:53.599356 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:38:53.664721 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:38:53.694823 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:38:53.824985 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:38:53.836729 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:38:53.847024 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:38:53.857694 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:38:53.999587 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:38:54.006838 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:38:54.044588 kernel: BTRFS info (device sda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:38:54.045924 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:38:54.056246 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:38:54.088048 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:38:54.105875 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:38:54.116890 ignition[900]: INFO : Ignition 2.20.0 Feb 13 15:38:54.116890 ignition[900]: INFO : Stage: mount Feb 13 15:38:54.116890 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:54.116890 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 15:38:54.116890 ignition[900]: INFO : mount: mount passed Feb 13 15:38:54.116890 ignition[900]: INFO : Ignition finished successfully Feb 13 15:38:54.131767 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:38:54.526888 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 15:38:54.550562 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (912) Feb 13 15:38:54.568500 kernel: BTRFS info (device sda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:38:54.568631 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:38:54.568659 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:38:54.590614 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:38:54.590730 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:38:54.594497 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:38:54.636419 ignition[929]: INFO : Ignition 2.20.0 Feb 13 15:38:54.636419 ignition[929]: INFO : Stage: files Feb 13 15:38:54.651086 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:54.651086 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 15:38:54.651086 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:38:54.651086 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:38:54.651086 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:38:54.651086 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:38:54.651086 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:38:54.651086 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:38:54.651086 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 15:38:54.651086 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 15:38:54.645713 unknown[929]: wrote ssh authorized keys file for user: core Feb 13 15:38:54.786743 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:38:54.747747 systemd-networkd[750]: eth0: Gained IPv6LL Feb 13 15:38:54.926670 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 15:38:54.943723 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:38:54.943723 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 15:38:55.237174 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:38:55.397235 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: 
createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 15:38:55.412683 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 15:38:55.646808 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 15:38:56.063804 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 15:38:56.063804 ignition[929]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 15:38:56.103866 ignition[929]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:38:56.103866 ignition[929]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:38:56.103866 ignition[929]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 15:38:56.103866 ignition[929]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:38:56.103866 ignition[929]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:38:56.103866 ignition[929]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:38:56.103866 ignition[929]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:38:56.103866 ignition[929]: INFO : files: files passed Feb 13 15:38:56.103866 ignition[929]: INFO : Ignition finished successfully Feb 13 15:38:56.068956 systemd[1]: Finished ignition-files.service - Ignition (files). 
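The files stage logged above is driven entirely by the instance's Ignition configuration. As a rough illustration only (this is not the config this machine actually booted with; all values are placeholders except the download URLs, which are copied from the log), an Ignition v3-style document producing similar operations could look like the following sketch, written in Python purely to emit the JSON:

# Hypothetical sketch of an Ignition v3-style config that would drive a "files"
# stage like the one logged above: an SSH key for "core", files fetched from
# URLs, a sysext symlink, and an enabled prepare-helm.service unit.
# NOT the actual config used on this machine; values are placeholders except
# the URLs, which are taken from the log itself.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/bin/cilium.tar.gz",
                "contents": {
                    "source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"
                },
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
                "contents": {
                    "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw"
                },
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n...",
            }
        ]
    },
}

# Emit the JSON document Ignition would consume.
print(json.dumps(config, indent=2))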
Feb 13 15:38:56.098801 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:38:56.120717 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:38:56.133447 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:38:56.316797 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:38:56.316797 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:38:56.133607 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:38:56.355862 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:38:56.225612 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:38:56.240928 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:38:56.264806 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:38:56.354087 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:38:56.354239 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:38:56.367156 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:38:56.390917 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:38:56.411969 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:38:56.418879 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:38:56.469378 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:38:56.494896 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:38:56.520439 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:38:56.533175 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:38:56.555142 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:38:56.575132 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:38:56.575402 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:38:56.607227 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:38:56.634134 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:38:56.651077 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:38:56.672089 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:38:56.682280 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:38:56.699348 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:38:56.727092 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:38:56.748181 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:38:56.768105 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:38:56.778188 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:38:56.803926 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:38:56.804218 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 15:38:56.834174 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:38:56.852095 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:38:56.870944 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:38:56.871180 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:38:56.892008 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:38:56.892249 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:38:56.923070 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:38:56.923393 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:38:57.058777 ignition[982]: INFO : Ignition 2.20.0 Feb 13 15:38:57.058777 ignition[982]: INFO : Stage: umount Feb 13 15:38:57.058777 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:57.058777 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 15:38:57.058777 ignition[982]: INFO : umount: umount passed Feb 13 15:38:57.058777 ignition[982]: INFO : Ignition finished successfully Feb 13 15:38:56.943214 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:38:56.943467 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:38:56.962915 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:38:56.980775 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:38:56.981306 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:38:56.996225 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:38:57.023785 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:38:57.024182 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:38:57.036299 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:38:57.036526 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:38:57.063198 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:38:57.064635 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:38:57.064754 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:38:57.078443 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:38:57.078598 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:38:57.086966 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:38:57.087189 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:38:57.112810 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:38:57.112917 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:38:57.133852 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:38:57.133961 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:38:57.150819 systemd[1]: Stopped target network.target - Network. Feb 13 15:38:57.168712 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:38:57.168845 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:38:57.187791 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:38:57.202716 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 13 15:38:57.206697 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:38:57.223713 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:38:57.238727 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:38:57.256798 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:38:57.256894 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:38:57.274815 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:38:57.274945 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:38:57.294760 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:38:57.294877 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:38:57.314787 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:38:57.314888 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:38:57.334825 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:38:57.334927 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:38:57.354992 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:38:57.381989 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:38:57.410430 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:38:57.410599 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:38:57.431203 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 15:38:57.431495 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:38:57.431655 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:38:57.437435 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 15:38:57.939724 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 13 15:38:57.437803 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:38:57.437920 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:38:57.457631 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:38:57.457698 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:38:57.476689 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:38:57.496674 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:38:57.496820 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:38:57.507803 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:38:57.507907 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:38:57.518017 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:38:57.518104 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:38:57.525985 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:38:57.526061 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:38:57.543133 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:38:57.574201 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Feb 13 15:38:57.574328 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:38:57.574889 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:38:57.575053 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:38:57.602006 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:38:57.602112 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:38:57.621894 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:38:57.621958 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:38:57.631970 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:38:57.632053 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:38:57.671065 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:38:57.671161 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:38:57.694982 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:38:57.695081 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:38:57.728753 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:38:57.741838 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:38:57.741956 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:38:57.759136 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:38:57.759208 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:38:57.789111 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 15:38:57.789311 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:38:57.789874 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:38:57.789992 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:38:57.808183 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:38:57.808301 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:38:57.819271 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:38:57.841726 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:38:57.887462 systemd[1]: Switching root. Feb 13 15:38:58.367741 systemd-journald[184]: Journal stopped Feb 13 15:39:00.947450 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:39:00.947558 kernel: SELinux: policy capability open_perms=1 Feb 13 15:39:00.947583 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:39:00.947602 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:39:00.947622 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:39:00.947641 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:39:00.947664 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:39:00.947684 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:39:00.947712 kernel: audit: type=1403 audit(1739461138.637:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:39:00.947755 systemd[1]: Successfully loaded SELinux policy in 97.876ms. 
Feb 13 15:39:00.947781 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.576ms. Feb 13 15:39:00.947807 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:39:00.947829 systemd[1]: Detected virtualization google. Feb 13 15:39:00.947851 systemd[1]: Detected architecture x86-64. Feb 13 15:39:00.947885 systemd[1]: Detected first boot. Feb 13 15:39:00.947910 systemd[1]: Initializing machine ID from random generator. Feb 13 15:39:00.947935 zram_generator::config[1025]: No configuration found. Feb 13 15:39:00.947969 kernel: Guest personality initialized and is inactive Feb 13 15:39:00.947991 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 13 15:39:00.948018 kernel: Initialized host personality Feb 13 15:39:00.948039 kernel: NET: Registered PF_VSOCK protocol family Feb 13 15:39:00.948061 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:39:00.948086 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 15:39:00.948108 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:39:00.948129 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:39:00.948151 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:39:00.948174 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:39:00.948194 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:39:00.948222 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:39:00.948244 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:39:00.948265 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:39:00.948286 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:39:00.948309 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:39:00.948329 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:39:00.948349 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:39:00.948379 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:39:00.948400 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:39:00.948421 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:39:00.948441 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:39:00.948466 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:39:00.948497 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:39:00.948520 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:39:00.948580 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:39:00.948608 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Feb 13 15:39:00.948627 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:39:00.948647 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:39:00.948665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:39:00.948684 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:39:00.948704 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:39:00.948722 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:39:00.948741 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:39:00.948767 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:39:00.948788 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 15:39:00.948808 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:39:00.948830 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:39:00.948853 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:39:00.948874 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:39:00.948895 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:39:00.948916 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:39:00.948944 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:39:00.948966 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:39:00.948987 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:39:00.949007 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:39:00.949032 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:39:00.949054 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:39:00.949075 systemd[1]: Reached target machines.target - Containers. Feb 13 15:39:00.949097 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:39:00.949119 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:39:00.949143 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:39:00.949165 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:39:00.949186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:39:00.949206 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:39:00.949232 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:39:00.949252 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:39:00.949289 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:39:00.949311 kernel: ACPI: bus type drm_connector registered Feb 13 15:39:00.949332 kernel: fuse: init (API version 7.39) Feb 13 15:39:00.949353 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 13 15:39:00.949375 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:39:00.949403 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:39:00.949425 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:39:00.949445 kernel: loop: module loaded Feb 13 15:39:00.949466 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:39:00.949490 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:39:00.949514 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:39:00.949557 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:39:00.949627 systemd-journald[1113]: Collecting audit messages is disabled. Feb 13 15:39:00.949686 systemd-journald[1113]: Journal started Feb 13 15:39:00.949732 systemd-journald[1113]: Runtime Journal (/run/log/journal/4265d8deaa73465d9b6a3a348ec10074) is 8M, max 148.6M, 140.6M free. Feb 13 15:38:59.672146 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:38:59.686497 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:38:59.687176 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:39:00.969586 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:39:00.981621 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:39:01.023589 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 15:39:01.048749 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:39:01.068552 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:39:01.068652 systemd[1]: Stopped verity-setup.service. Feb 13 15:39:01.111195 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:39:01.111361 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:39:01.124623 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:39:01.136240 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:39:01.147230 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:39:01.158100 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:39:01.168076 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:39:01.179010 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:39:01.189310 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:39:01.201285 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:39:01.213231 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:39:01.213596 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:39:01.225156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:39:01.225472 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:39:01.237367 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 15:39:01.237774 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:39:01.248188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:39:01.248504 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:39:01.260269 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:39:01.260615 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:39:01.271244 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:39:01.271577 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:39:01.282338 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:39:01.293307 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:39:01.305336 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:39:01.317347 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 15:39:01.329336 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:39:01.355711 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:39:01.371717 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:39:01.393747 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:39:01.403740 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:39:01.403821 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:39:01.417034 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 15:39:01.433829 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:39:01.456117 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:39:01.465983 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:39:01.478586 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:39:01.494348 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:39:01.505826 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:39:01.520472 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:39:01.530810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:39:01.542837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:39:01.550770 systemd-journald[1113]: Time spent on flushing to /var/log/journal/4265d8deaa73465d9b6a3a348ec10074 is 55.040ms for 950 entries. Feb 13 15:39:01.550770 systemd-journald[1113]: System Journal (/var/log/journal/4265d8deaa73465d9b6a3a348ec10074) is 8M, max 584.8M, 576.8M free. Feb 13 15:39:01.647117 systemd-journald[1113]: Received client request to flush runtime journal. 
Feb 13 15:39:01.647189 kernel: loop0: detected capacity change from 0 to 138176 Feb 13 15:39:01.570743 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:39:01.590055 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:39:01.606887 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:39:01.626038 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:39:01.640004 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:39:01.657861 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:39:01.668690 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:39:01.683165 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:39:01.698391 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:39:01.727509 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:39:01.744566 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:39:01.755933 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 15:39:01.772973 udevadm[1152]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:39:01.782905 kernel: loop1: detected capacity change from 0 to 52152 Feb 13 15:39:01.808522 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:39:01.829749 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:39:01.842524 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:39:01.851031 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 15:39:01.870586 kernel: loop2: detected capacity change from 0 to 218376 Feb 13 15:39:01.938372 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Feb 13 15:39:01.939228 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Feb 13 15:39:01.954096 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:39:02.007840 kernel: loop3: detected capacity change from 0 to 147912 Feb 13 15:39:02.107590 kernel: loop4: detected capacity change from 0 to 138176 Feb 13 15:39:02.158580 kernel: loop5: detected capacity change from 0 to 52152 Feb 13 15:39:02.209352 kernel: loop6: detected capacity change from 0 to 218376 Feb 13 15:39:02.264592 kernel: loop7: detected capacity change from 0 to 147912 Feb 13 15:39:02.321894 (sd-merge)[1173]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Feb 13 15:39:02.323981 (sd-merge)[1173]: Merged extensions into '/usr'. Feb 13 15:39:02.336250 systemd[1]: Reload requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:39:02.336506 systemd[1]: Reloading... Feb 13 15:39:02.506092 zram_generator::config[1197]: No configuration found. Feb 13 15:39:02.754428 ldconfig[1144]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
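The sd-merge lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-gce' extensions onto /usr. As a simplified, hypothetical sketch of the compatibility check performed before such a merge (the real implementation has more rules: architecture matching, ID=_any, confext handling, and so on), the core ID / SYSEXT_LEVEL comparison can be pictured like this:

# Simplified, hypothetical sketch of the extension-release match that
# systemd-sysext performs before merging an extension into /usr.
# Illustrative only; not the actual systemd code path.
from pathlib import Path

def parse_release(path: Path) -> dict:
    """Parse a simple KEY=value os-release style file."""
    data = {}
    for line in path.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            data[key] = value.strip('"')
    return data

def extension_matches(host_release: Path, ext_root: Path, name: str) -> bool:
    """Return True if the extension's release file is compatible with the host."""
    host = parse_release(host_release)
    rel = ext_root / "usr/lib/extension-release.d" / f"extension-release.{name}"
    if not rel.exists():
        return False
    ext = parse_release(rel)
    if ext.get("ID") not in ("_any", host.get("ID")):
        return False
    # Either the sysext level or the exact version must line up.
    return (
        ext.get("SYSEXT_LEVEL") == host.get("SYSEXT_LEVEL")
        or ext.get("VERSION_ID") == host.get("VERSION_ID")
    )

# Example (paths are illustrative):
# extension_matches(Path("/etc/os-release"), Path("/var/lib/extensions/kubernetes"), "kubernetes")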
Feb 13 15:39:02.807340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:39:02.957579 systemd[1]: Reloading finished in 619 ms. Feb 13 15:39:02.976231 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:39:02.986580 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:39:03.014877 systemd[1]: Starting ensure-sysext.service... Feb 13 15:39:03.033076 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:39:03.075947 systemd[1]: Reload requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:39:03.075978 systemd[1]: Reloading... Feb 13 15:39:03.099914 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:39:03.100361 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:39:03.106024 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:39:03.107365 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Feb 13 15:39:03.108769 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Feb 13 15:39:03.127926 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:39:03.130599 systemd-tmpfiles[1242]: Skipping /boot Feb 13 15:39:03.169225 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:39:03.170516 systemd-tmpfiles[1242]: Skipping /boot Feb 13 15:39:03.274963 zram_generator::config[1274]: No configuration found. Feb 13 15:39:03.408432 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:39:03.503443 systemd[1]: Reloading finished in 426 ms. Feb 13 15:39:03.520188 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:39:03.548871 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:39:03.574968 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:39:03.595674 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:39:03.613809 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:39:03.641809 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:39:03.664604 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:39:03.687438 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:39:03.707597 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:39:03.708152 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:39:03.721664 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:39:03.743673 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 13 15:39:03.757589 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Feb 13 15:39:03.763257 augenrules[1340]: No rules Feb 13 15:39:03.766002 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:39:03.775878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:39:03.776409 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:39:03.786346 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:39:03.794404 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:39:03.803036 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:39:03.804772 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:39:03.817821 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:39:03.829220 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:39:03.843162 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:39:03.856346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:39:03.858627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:39:03.870849 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:39:03.872625 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:39:03.885126 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:39:03.885861 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:39:03.944337 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:39:03.956337 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:39:04.001984 systemd[1]: Finished ensure-sysext.service. Feb 13 15:39:04.029269 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Feb 13 15:39:04.030597 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Feb 13 15:39:04.040794 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:39:04.050801 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:39:04.061008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:39:04.070842 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:39:04.092795 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:39:04.111470 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:39:04.129805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:39:04.165602 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 15:39:04.168401 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:39:04.174775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 15:39:04.174869 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:39:04.190864 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Feb 13 15:39:04.210210 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:39:04.194879 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:39:04.210463 augenrules[1382]: /sbin/augenrules: No change Feb 13 15:39:04.213837 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:39:04.221125 augenrules[1405]: No rules Feb 13 15:39:04.230644 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Feb 13 15:39:04.239792 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:39:04.249722 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:39:04.249797 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:39:04.256025 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:39:04.256391 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:39:04.260663 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 15:39:04.270458 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:39:04.270853 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:39:04.282412 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:39:04.282771 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:39:04.293342 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:39:04.293853 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:39:04.303273 systemd-resolved[1325]: Positive Trust Anchors: Feb 13 15:39:04.303301 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:39:04.303374 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:39:04.305400 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:39:04.305893 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:39:04.319407 systemd-resolved[1325]: Defaulting to hostname 'linux'. Feb 13 15:39:04.323446 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:39:04.334004 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:39:04.344404 systemd[1]: Finished setup-oem.service - Setup OEM. 
Feb 13 15:39:04.365687 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:39:04.375667 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Feb 13 15:39:04.375343 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:39:04.393829 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Feb 13 15:39:04.411575 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1367) Feb 13 15:39:04.411719 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:39:04.423405 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:39:04.423596 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:39:04.522645 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Feb 13 15:39:04.594731 systemd-networkd[1397]: lo: Link UP Feb 13 15:39:04.594749 systemd-networkd[1397]: lo: Gained carrier Feb 13 15:39:04.602197 systemd-networkd[1397]: Enumeration completed Feb 13 15:39:04.604956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:39:04.605263 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:39:04.605272 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:39:04.608065 systemd-networkd[1397]: eth0: Link UP Feb 13 15:39:04.608083 systemd-networkd[1397]: eth0: Gained carrier Feb 13 15:39:04.608115 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:39:04.614900 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:39:04.624616 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:39:04.624725 systemd-networkd[1397]: eth0: DHCPv4 address 10.128.0.26/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 15:39:04.633703 systemd[1]: Reached target network.target - Network. Feb 13 15:39:04.652199 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:39:04.671932 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:39:04.677409 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:39:04.688339 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 15:39:04.695086 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:39:04.698930 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:39:04.741336 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:39:04.749466 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:39:04.756016 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:39:04.786500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 15:39:04.798223 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:39:04.810971 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:39:04.821765 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:39:04.831972 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:39:04.843825 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:39:04.855025 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:39:04.864955 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:39:04.876774 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:39:04.887743 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:39:04.887825 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:39:04.896754 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:39:04.908473 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:39:04.920952 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:39:04.932651 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:39:04.944125 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:39:04.955785 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:39:04.974817 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:39:04.985609 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:39:05.002849 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:39:05.025130 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:39:05.032935 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:39:05.035162 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:39:05.045818 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:39:05.054893 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:39:05.054955 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:39:05.060744 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:39:05.079870 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:39:05.097205 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:39:05.114752 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:39:05.143364 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:39:05.154411 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:39:05.160983 jq[1464]: false Feb 13 15:39:05.160830 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Feb 13 15:39:05.179930 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:39:05.197689 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:39:05.201244 coreos-metadata[1462]: Feb 13 15:39:05.201 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Feb 13 15:39:05.203350 coreos-metadata[1462]: Feb 13 15:39:05.202 INFO Fetch successful Feb 13 15:39:05.203350 coreos-metadata[1462]: Feb 13 15:39:05.202 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Feb 13 15:39:05.203834 coreos-metadata[1462]: Feb 13 15:39:05.203 INFO Fetch successful Feb 13 15:39:05.203834 coreos-metadata[1462]: Feb 13 15:39:05.203 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Feb 13 15:39:05.205094 coreos-metadata[1462]: Feb 13 15:39:05.204 INFO Fetch successful Feb 13 15:39:05.205094 coreos-metadata[1462]: Feb 13 15:39:05.205 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Feb 13 15:39:05.217933 coreos-metadata[1462]: Feb 13 15:39:05.213 INFO Fetch successful Feb 13 15:39:05.216887 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:39:05.239130 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:39:05.262874 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:39:05.266692 extend-filesystems[1465]: Found loop4 Feb 13 15:39:05.283672 extend-filesystems[1465]: Found loop5 Feb 13 15:39:05.283672 extend-filesystems[1465]: Found loop6 Feb 13 15:39:05.283672 extend-filesystems[1465]: Found loop7 Feb 13 15:39:05.283672 extend-filesystems[1465]: Found sda Feb 13 15:39:05.283672 extend-filesystems[1465]: Found sda1 Feb 13 15:39:05.283672 extend-filesystems[1465]: Found sda2 Feb 13 15:39:05.283672 extend-filesystems[1465]: Found sda3 Feb 13 15:39:05.283672 extend-filesystems[1465]: Found usr Feb 13 15:39:05.283672 extend-filesystems[1465]: Found sda4 Feb 13 15:39:05.283672 extend-filesystems[1465]: Found sda6 Feb 13 15:39:05.283672 extend-filesystems[1465]: Found sda7 Feb 13 15:39:05.283672 extend-filesystems[1465]: Found sda9 Feb 13 15:39:05.283672 extend-filesystems[1465]: Checking size of /dev/sda9 Feb 13 15:39:05.461745 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Feb 13 15:39:05.461813 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Feb 13 15:39:05.274315 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Feb 13 15:39:05.291780 ntpd[1469]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:23:52 UTC 2025 (1): Starting Feb 13 15:39:05.462475 extend-filesystems[1465]: Resized partition /dev/sda9 Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:23:52 UTC 2025 (1): Starting Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: ---------------------------------------------------- Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: corporation. 
Support and training for ntp-4 are Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: available at https://www.nwtime.org/support Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: ---------------------------------------------------- Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: proto: precision = 0.088 usec (-23) Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: basedate set to 2025-02-01 Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: gps base set to 2025-02-02 (week 2352) Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: Listen normally on 3 eth0 10.128.0.26:123 Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: Listen normally on 4 lo [::1]:123 Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: bind(21) AF_INET6 fe80::4001:aff:fe80:1a%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1a%2#123 Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: failed to init interface for address fe80::4001:aff:fe80:1a%2 Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: Listening on routing socket on fd #21 for interface updates Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:39:05.470161 ntpd[1469]: 13 Feb 15:39:05 ntpd[1469]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:39:05.277200 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:39:05.291814 ntpd[1469]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:39:05.472885 update_engine[1485]: I20250213 15:39:05.379758 1485 main.cc:92] Flatcar Update Engine starting Feb 13 15:39:05.472885 update_engine[1485]: I20250213 15:39:05.390331 1485 update_check_scheduler.cc:74] Next update check in 11m57s Feb 13 15:39:05.477669 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:39:05.477669 extend-filesystems[1492]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 15:39:05.477669 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 2 Feb 13 15:39:05.477669 extend-filesystems[1492]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Feb 13 15:39:05.553025 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1366) Feb 13 15:39:05.281840 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:39:05.291829 ntpd[1469]: ---------------------------------------------------- Feb 13 15:39:05.553260 jq[1486]: true Feb 13 15:39:05.553681 extend-filesystems[1465]: Resized filesystem in /dev/sda9 Feb 13 15:39:05.285175 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:39:05.291843 ntpd[1469]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:39:05.315387 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:39:05.291856 ntpd[1469]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:39:05.345656 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:39:05.291869 ntpd[1469]: corporation. Support and training for ntp-4 are Feb 13 15:39:05.387245 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:39:05.291883 ntpd[1469]: available at https://www.nwtime.org/support Feb 13 15:39:05.387608 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:39:05.291897 ntpd[1469]: ---------------------------------------------------- Feb 13 15:39:05.388169 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:39:05.293644 dbus-daemon[1463]: [system] SELinux support is enabled Feb 13 15:39:05.389189 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:39:05.295927 ntpd[1469]: proto: precision = 0.088 usec (-23) Feb 13 15:39:05.441210 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:39:05.296352 ntpd[1469]: basedate set to 2025-02-01 Feb 13 15:39:05.441582 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:39:05.296375 ntpd[1469]: gps base set to 2025-02-02 (week 2352) Feb 13 15:39:05.456114 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:39:05.297419 dbus-daemon[1463]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1397 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:39:05.456491 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:39:05.307454 ntpd[1469]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:39:05.539766 systemd-logind[1480]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:39:05.307548 ntpd[1469]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:39:05.539801 systemd-logind[1480]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 15:39:05.307829 ntpd[1469]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:39:05.539835 systemd-logind[1480]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:39:05.307897 ntpd[1469]: Listen normally on 3 eth0 10.128.0.26:123 Feb 13 15:39:05.552790 systemd-logind[1480]: New seat seat0. Feb 13 15:39:05.307956 ntpd[1469]: Listen normally on 4 lo [::1]:123 Feb 13 15:39:05.556236 (ntainerd)[1504]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:39:05.308023 ntpd[1469]: bind(21) AF_INET6 fe80::4001:aff:fe80:1a%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:39:05.308054 ntpd[1469]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1a%2#123 Feb 13 15:39:05.308077 ntpd[1469]: failed to init interface for address fe80::4001:aff:fe80:1a%2 Feb 13 15:39:05.308136 ntpd[1469]: Listening on routing socket on fd #21 for interface updates Feb 13 15:39:05.313767 ntpd[1469]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:39:05.313812 ntpd[1469]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:39:05.550761 dbus-daemon[1463]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 15:39:05.632696 systemd[1]: Started systemd-logind.service - User Login Management. 
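The extend-filesystems entries above record an online grow of the root filesystem: resize2fs notes that /dev/sda9 is mounted on / and needs on-line resizing, and the kernel confirms the ext4 volume went from 1617920 to 2538491 4k blocks (roughly 6.2 GiB to 9.7 GiB). A minimal sketch of the same operation done by hand follows; the device names simply mirror this log, and growpart (from cloud-utils) is only needed if the underlying disk itself was enlarged.

# grow partition 9 if the disk was resized underneath it (cloud-utils growpart)
growpart /dev/sda 9

# ext4 can be grown while mounted; with no size argument resize2fs fills the partition
resize2fs /dev/sda9

# confirm the new size
df -h /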
Feb 13 15:39:05.642368 jq[1499]: true Feb 13 15:39:05.652203 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:39:05.655556 tar[1498]: linux-amd64/LICENSE Feb 13 15:39:05.656139 tar[1498]: linux-amd64/helm Feb 13 15:39:05.699084 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:39:05.712447 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:39:05.713088 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:39:05.715889 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:39:05.737976 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 15:39:05.747721 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:39:05.748011 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:39:05.769990 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:39:05.906559 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:39:05.906304 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:39:05.966920 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 15:39:05.968282 dbus-daemon[1463]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:39:05.974117 dbus-daemon[1463]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1522 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:39:05.990898 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 15:39:06.013973 systemd[1]: Starting sshkeys.service... Feb 13 15:39:06.071356 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:39:06.092157 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
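locksmithd, started above as the cluster reboot manager, decides when update-engine (which has already scheduled its next check) may reboot the machine to apply a downloaded update; a bit further down it logs that it is running with strategy "reboot". As a hedged illustration only, on Flatcar that strategy is normally set through /etc/flatcar/update.conf, with values along these lines (this host's actual file is not shown in the log):

# illustrative /etc/flatcar/update.conf
cat <<'EOF' >/etc/flatcar/update.conf
GROUP=stable
REBOOT_STRATEGY=reboot   # alternatives: off, etcd-lock, best-effort
EOF

# make both daemons re-read the file
systemctl restart update-engine.service locksmithd.service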
Feb 13 15:39:06.131999 polkitd[1539]: Started polkitd version 121 Feb 13 15:39:06.169814 coreos-metadata[1542]: Feb 13 15:39:06.167 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Feb 13 15:39:06.173142 coreos-metadata[1542]: Feb 13 15:39:06.170 INFO Fetch failed with 404: resource not found Feb 13 15:39:06.173142 coreos-metadata[1542]: Feb 13 15:39:06.170 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Feb 13 15:39:06.173142 coreos-metadata[1542]: Feb 13 15:39:06.171 INFO Fetch successful Feb 13 15:39:06.173142 coreos-metadata[1542]: Feb 13 15:39:06.171 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Feb 13 15:39:06.173142 coreos-metadata[1542]: Feb 13 15:39:06.172 INFO Fetch failed with 404: resource not found Feb 13 15:39:06.173142 coreos-metadata[1542]: Feb 13 15:39:06.172 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Feb 13 15:39:06.174472 coreos-metadata[1542]: Feb 13 15:39:06.173 INFO Fetch failed with 404: resource not found Feb 13 15:39:06.174472 coreos-metadata[1542]: Feb 13 15:39:06.173 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Feb 13 15:39:06.174472 coreos-metadata[1542]: Feb 13 15:39:06.174 INFO Fetch successful Feb 13 15:39:06.176471 polkitd[1539]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 15:39:06.176768 polkitd[1539]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 15:39:06.177854 unknown[1542]: wrote ssh authorized keys file for user: core Feb 13 15:39:06.183682 polkitd[1539]: Finished loading, compiling and executing 2 rules Feb 13 15:39:06.185477 dbus-daemon[1463]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 15:39:06.186645 polkitd[1539]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 15:39:06.191463 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 15:39:06.235654 systemd-hostnamed[1522]: Hostname set to (transient) Feb 13 15:39:06.237802 update-ssh-keys[1553]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:39:06.240040 systemd-resolved[1325]: System hostname changed to 'ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal'. Feb 13 15:39:06.244101 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:39:06.264655 systemd[1]: Finished sshkeys.service. 
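Both coreos-metadata runs above walk the GCE metadata service at 169.254.169.254, and the SSH-key pass falls back from instance-level attributes to project-level ones (the 404s are normal when an attribute is simply unset; project attributes/ssh-keys is the one that finally succeeds and populates /home/core/.ssh/authorized_keys). The same endpoints can be queried by hand; the only requirement is the Metadata-Flavor header:

# instance hostname, as fetched successfully earlier in the log
curl -s -H "Metadata-Flavor: Google" \
  http://169.254.169.254/computeMetadata/v1/instance/hostname

# project-wide SSH keys, the attribute that finally succeeded for the 'core' user
curl -s -H "Metadata-Flavor: Google" \
  http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys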
Feb 13 15:39:06.293479 ntpd[1469]: bind(24) AF_INET6 fe80::4001:aff:fe80:1a%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:39:06.295948 ntpd[1469]: 13 Feb 15:39:06 ntpd[1469]: bind(24) AF_INET6 fe80::4001:aff:fe80:1a%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:39:06.295948 ntpd[1469]: 13 Feb 15:39:06 ntpd[1469]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:1a%2#123 Feb 13 15:39:06.295948 ntpd[1469]: 13 Feb 15:39:06 ntpd[1469]: failed to init interface for address fe80::4001:aff:fe80:1a%2 Feb 13 15:39:06.295220 ntpd[1469]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:1a%2#123 Feb 13 15:39:06.295258 ntpd[1469]: failed to init interface for address fe80::4001:aff:fe80:1a%2 Feb 13 15:39:06.339862 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:39:06.413924 locksmithd[1526]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:39:06.433995 containerd[1504]: time="2025-02-13T15:39:06.432167107Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:39:06.462758 systemd-networkd[1397]: eth0: Gained IPv6LL Feb 13 15:39:06.472047 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:39:06.485646 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:39:06.500248 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:39:06.521750 containerd[1504]: time="2025-02-13T15:39:06.521631384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:06.523321 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:39:06.527583 containerd[1504]: time="2025-02-13T15:39:06.526487653Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:06.527583 containerd[1504]: time="2025-02-13T15:39:06.526604389Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:39:06.527583 containerd[1504]: time="2025-02-13T15:39:06.526650788Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:39:06.527583 containerd[1504]: time="2025-02-13T15:39:06.527055864Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:39:06.527583 containerd[1504]: time="2025-02-13T15:39:06.527095684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:06.527583 containerd[1504]: time="2025-02-13T15:39:06.527248188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:06.527583 containerd[1504]: time="2025-02-13T15:39:06.527281346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:06.528152 containerd[1504]: time="2025-02-13T15:39:06.528115650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:06.528243 containerd[1504]: time="2025-02-13T15:39:06.528227573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:06.528337 containerd[1504]: time="2025-02-13T15:39:06.528315099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:06.528430 containerd[1504]: time="2025-02-13T15:39:06.528411767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:06.528687 containerd[1504]: time="2025-02-13T15:39:06.528663434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:06.529147 containerd[1504]: time="2025-02-13T15:39:06.529114071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:06.529594 containerd[1504]: time="2025-02-13T15:39:06.529564446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:06.529992 containerd[1504]: time="2025-02-13T15:39:06.529690642Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:39:06.529992 containerd[1504]: time="2025-02-13T15:39:06.529867529Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:39:06.529992 containerd[1504]: time="2025-02-13T15:39:06.529941809Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:39:06.538910 containerd[1504]: time="2025-02-13T15:39:06.538847936Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:39:06.539473 containerd[1504]: time="2025-02-13T15:39:06.539368280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:39:06.539597 containerd[1504]: time="2025-02-13T15:39:06.539493556Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:39:06.540206 containerd[1504]: time="2025-02-13T15:39:06.540158191Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:39:06.540284 containerd[1504]: time="2025-02-13T15:39:06.540232716Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:39:06.544629 containerd[1504]: time="2025-02-13T15:39:06.541685489Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:39:06.541281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:06.545026 containerd[1504]: time="2025-02-13T15:39:06.544732030Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:39:06.545679 containerd[1504]: time="2025-02-13T15:39:06.545182631Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 15:39:06.545762 containerd[1504]: time="2025-02-13T15:39:06.545695140Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:39:06.545762 containerd[1504]: time="2025-02-13T15:39:06.545729453Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:39:06.545762 containerd[1504]: time="2025-02-13T15:39:06.545755983Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:39:06.546648 containerd[1504]: time="2025-02-13T15:39:06.545780947Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:39:06.546738 containerd[1504]: time="2025-02-13T15:39:06.546664613Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:39:06.546738 containerd[1504]: time="2025-02-13T15:39:06.546696070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:39:06.546738 containerd[1504]: time="2025-02-13T15:39:06.546725993Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:39:06.546868 containerd[1504]: time="2025-02-13T15:39:06.546752901Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:39:06.546868 containerd[1504]: time="2025-02-13T15:39:06.546777018Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:39:06.546868 containerd[1504]: time="2025-02-13T15:39:06.546799261Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:39:06.546868 containerd[1504]: time="2025-02-13T15:39:06.546839528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.547032 containerd[1504]: time="2025-02-13T15:39:06.546869187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.547032 containerd[1504]: time="2025-02-13T15:39:06.546892708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.547032 containerd[1504]: time="2025-02-13T15:39:06.546918823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.547032 containerd[1504]: time="2025-02-13T15:39:06.546941260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.547032 containerd[1504]: time="2025-02-13T15:39:06.546966766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.547032 containerd[1504]: time="2025-02-13T15:39:06.547004151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.547299 containerd[1504]: time="2025-02-13T15:39:06.547031676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.547299 containerd[1504]: time="2025-02-13T15:39:06.547056490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 15:39:06.547299 containerd[1504]: time="2025-02-13T15:39:06.547087748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.547299 containerd[1504]: time="2025-02-13T15:39:06.547110411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.547299 containerd[1504]: time="2025-02-13T15:39:06.547133931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.547299 containerd[1504]: time="2025-02-13T15:39:06.547159371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.550563 containerd[1504]: time="2025-02-13T15:39:06.548568697Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:39:06.550563 containerd[1504]: time="2025-02-13T15:39:06.549229863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.550563 containerd[1504]: time="2025-02-13T15:39:06.549273568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.550563 containerd[1504]: time="2025-02-13T15:39:06.549306094Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:39:06.550563 containerd[1504]: time="2025-02-13T15:39:06.549804630Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:39:06.550829 containerd[1504]: time="2025-02-13T15:39:06.550568982Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:39:06.550829 containerd[1504]: time="2025-02-13T15:39:06.550602149Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:39:06.550829 containerd[1504]: time="2025-02-13T15:39:06.550630621Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:39:06.550829 containerd[1504]: time="2025-02-13T15:39:06.550648141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:39:06.550829 containerd[1504]: time="2025-02-13T15:39:06.550672986Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:39:06.550829 containerd[1504]: time="2025-02-13T15:39:06.550693348Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:39:06.550829 containerd[1504]: time="2025-02-13T15:39:06.550713686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:39:06.555805 containerd[1504]: time="2025-02-13T15:39:06.555693622Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:39:06.556110 containerd[1504]: time="2025-02-13T15:39:06.555806938Z" level=info msg="Connect containerd service" Feb 13 15:39:06.556110 containerd[1504]: time="2025-02-13T15:39:06.555895463Z" level=info msg="using legacy CRI server" Feb 13 15:39:06.556110 containerd[1504]: time="2025-02-13T15:39:06.555912482Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:39:06.556234 containerd[1504]: time="2025-02-13T15:39:06.556171249Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:39:06.564966 containerd[1504]: time="2025-02-13T15:39:06.564913606Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:39:06.566285 
containerd[1504]: time="2025-02-13T15:39:06.565519564Z" level=info msg="Start subscribing containerd event" Feb 13 15:39:06.566465 containerd[1504]: time="2025-02-13T15:39:06.566446798Z" level=info msg="Start recovering state" Feb 13 15:39:06.566562 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:39:06.567097 containerd[1504]: time="2025-02-13T15:39:06.567065575Z" level=info msg="Start event monitor" Feb 13 15:39:06.567845 containerd[1504]: time="2025-02-13T15:39:06.567590948Z" level=info msg="Start snapshots syncer" Feb 13 15:39:06.567845 containerd[1504]: time="2025-02-13T15:39:06.567625114Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:39:06.567845 containerd[1504]: time="2025-02-13T15:39:06.567641980Z" level=info msg="Start streaming server" Feb 13 15:39:06.568332 containerd[1504]: time="2025-02-13T15:39:06.568304555Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:39:06.569574 containerd[1504]: time="2025-02-13T15:39:06.568867319Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:39:06.569574 containerd[1504]: time="2025-02-13T15:39:06.568979245Z" level=info msg="containerd successfully booted in 0.140743s" Feb 13 15:39:06.585706 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Feb 13 15:39:06.596481 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:39:06.607990 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:39:06.613846 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:39:06.622995 init.sh[1578]: + '[' -e /etc/default/instance_configs.cfg.template ']' Feb 13 15:39:06.622995 init.sh[1578]: + echo -e '[InstanceSetup]\nset_host_keys = false' Feb 13 15:39:06.622995 init.sh[1578]: + /usr/bin/google_instance_setup Feb 13 15:39:06.653339 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:39:06.664931 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:39:06.712683 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:39:06.736152 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:39:06.753336 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:39:06.764158 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:39:07.019384 tar[1498]: linux-amd64/README.md Feb 13 15:39:07.046175 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:39:07.350809 instance-setup[1582]: INFO Running google_set_multiqueue. Feb 13 15:39:07.373686 instance-setup[1582]: INFO Set channels for eth0 to 2. Feb 13 15:39:07.379223 instance-setup[1582]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Feb 13 15:39:07.382479 instance-setup[1582]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Feb 13 15:39:07.382898 instance-setup[1582]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Feb 13 15:39:07.384884 instance-setup[1582]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Feb 13 15:39:07.385635 instance-setup[1582]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Feb 13 15:39:07.387717 instance-setup[1582]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Feb 13 15:39:07.388235 instance-setup[1582]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
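containerd finishes booting with the CRI plugin loaded but warns that no network config was found in /etc/cni/net.d, so pod networking is not usable yet; on a kubeadm-style node that directory is normally populated later, when a CNI plugin such as flannel or calico is installed. Purely to illustrate the file format containerd is waiting for, a minimal bridge/host-local conflist could look like this (the network name and subnet are invented):

cat <<'EOF' >/etc/cni/net.d/10-example.conflist
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    }
  ]
}
EOF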
Feb 13 15:39:07.390098 instance-setup[1582]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Feb 13 15:39:07.401493 instance-setup[1582]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 15:39:07.406083 instance-setup[1582]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 15:39:07.408242 instance-setup[1582]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Feb 13 15:39:07.408302 instance-setup[1582]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Feb 13 15:39:07.441121 init.sh[1578]: + /usr/bin/google_metadata_script_runner --script-type startup Feb 13 15:39:07.616968 startup-script[1626]: INFO Starting startup scripts. Feb 13 15:39:07.626399 startup-script[1626]: INFO No startup scripts found in metadata. Feb 13 15:39:07.626465 startup-script[1626]: INFO Finished running startup scripts. Feb 13 15:39:07.660976 init.sh[1578]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Feb 13 15:39:07.660976 init.sh[1578]: + daemon_pids=() Feb 13 15:39:07.660976 init.sh[1578]: + for d in accounts clock_skew network Feb 13 15:39:07.660976 init.sh[1578]: + daemon_pids+=($!) Feb 13 15:39:07.660976 init.sh[1578]: + for d in accounts clock_skew network Feb 13 15:39:07.660976 init.sh[1578]: + daemon_pids+=($!) Feb 13 15:39:07.660976 init.sh[1578]: + for d in accounts clock_skew network Feb 13 15:39:07.661918 init.sh[1578]: + daemon_pids+=($!) Feb 13 15:39:07.661918 init.sh[1578]: + NOTIFY_SOCKET=/run/systemd/notify Feb 13 15:39:07.661918 init.sh[1578]: + /usr/bin/systemd-notify --ready Feb 13 15:39:07.662522 init.sh[1630]: + /usr/bin/google_clock_skew_daemon Feb 13 15:39:07.662884 init.sh[1631]: + /usr/bin/google_network_daemon Feb 13 15:39:07.663164 init.sh[1629]: + /usr/bin/google_accounts_daemon Feb 13 15:39:07.694491 systemd[1]: Started oem-gce.service - GCE Linux Agent. Feb 13 15:39:07.705833 init.sh[1578]: + wait -n 1629 1630 1631 Feb 13 15:39:08.074271 google-clock-skew[1630]: INFO Starting Google Clock Skew daemon. Feb 13 15:39:08.089519 google-clock-skew[1630]: INFO Clock drift token has changed: 0. Feb 13 15:39:08.107901 google-networking[1631]: INFO Starting Google Networking daemon. Feb 13 15:39:08.170362 groupadd[1641]: group added to /etc/group: name=google-sudoers, GID=1000 Feb 13 15:39:08.177775 groupadd[1641]: group added to /etc/gshadow: name=google-sudoers Feb 13 15:39:08.237860 groupadd[1641]: new group: name=google-sudoers, GID=1000 Feb 13 15:39:08.273118 google-accounts[1629]: INFO Starting Google Accounts daemon. Feb 13 15:39:08.287925 google-accounts[1629]: WARNING OS Login not installed. Feb 13 15:39:08.290154 google-accounts[1629]: INFO Creating a new user account for 0. Feb 13 15:39:08.298048 init.sh[1649]: useradd: invalid user name '0': use --badname to ignore Feb 13 15:39:08.298410 google-accounts[1629]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Feb 13 15:39:08.507574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:08.519621 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:39:08.525189 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:39:08.532700 systemd[1]: Startup finished in 1.060s (kernel) + 9.854s (initrd) + 9.981s (userspace) = 20.896s. 
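google_set_multiqueue spreads the virtio-net queues of this 2-vCPU instance across both CPUs: each queue interrupt is pinned via /proc/irq/<n>/smp_affinity_list and each TX queue gets an XPS CPU mask (XPS=1 selects CPU0, XPS=2 selects CPU1); the "write error: Value too large" lines look like the script probing sysfs entries that do not apply at this machine size. The same knobs, written by hand with the IRQ numbers taken from the log:

# pin the virtio1 queue interrupts (IRQs 31-34 here) to alternating CPUs
echo 0 > /proc/irq/31/smp_affinity_list
echo 0 > /proc/irq/32/smp_affinity_list
echo 1 > /proc/irq/33/smp_affinity_list
echo 1 > /proc/irq/34/smp_affinity_list

# XPS: steer each transmit queue to one CPU (hex bitmask, 1 = CPU0, 2 = CPU1)
echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus
echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus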
Feb 13 15:39:09.001190 google-clock-skew[1630]: INFO Synced system time with hardware clock. Feb 13 15:39:09.001615 systemd-resolved[1325]: Clock change detected. Flushing caches. Feb 13 15:39:09.673437 ntpd[1469]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:1a%2]:123 Feb 13 15:39:09.674047 ntpd[1469]: 13 Feb 15:39:09 ntpd[1469]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:1a%2]:123 Feb 13 15:39:09.855626 kubelet[1656]: E0213 15:39:09.855524 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:39:09.859980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:39:09.860259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:39:09.860909 systemd[1]: kubelet.service: Consumed 1.293s CPU time, 255.1M memory peak. Feb 13 15:39:15.351343 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:39:15.360868 systemd[1]: Started sshd@0-10.128.0.26:22-139.178.68.195:33928.service - OpenSSH per-connection server daemon (139.178.68.195:33928). Feb 13 15:39:15.666163 sshd[1668]: Accepted publickey for core from 139.178.68.195 port 33928 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:15.669159 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:15.684698 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:39:15.690763 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:39:15.695468 systemd-logind[1480]: New session 1 of user core. Feb 13 15:39:15.720765 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:39:15.732927 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:39:15.752355 (systemd)[1672]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:39:15.756366 systemd-logind[1480]: New session c1 of user core. Feb 13 15:39:15.944358 systemd[1672]: Queued start job for default target default.target. Feb 13 15:39:15.951391 systemd[1672]: Created slice app.slice - User Application Slice. Feb 13 15:39:15.951452 systemd[1672]: Reached target paths.target - Paths. Feb 13 15:39:15.951552 systemd[1672]: Reached target timers.target - Timers. Feb 13 15:39:15.953741 systemd[1672]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:39:15.980608 systemd[1672]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:39:15.980857 systemd[1672]: Reached target sockets.target - Sockets. Feb 13 15:39:15.980953 systemd[1672]: Reached target basic.target - Basic System. Feb 13 15:39:15.981034 systemd[1672]: Reached target default.target - Main User Target. Feb 13 15:39:15.981099 systemd[1672]: Startup finished in 213ms. Feb 13 15:39:15.981437 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:39:15.995876 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:39:16.231872 systemd[1]: Started sshd@1-10.128.0.26:22-139.178.68.195:33932.service - OpenSSH per-connection server daemon (139.178.68.195:33932). 
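kubelet exits right away because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written by kubeadm init or kubeadm join, so this failure (and the restart loop that follows) is expected until the node is bootstrapped. For orientation only, a minimal hand-written KubeletConfiguration of the kind that normally lands there might look like the sketch below; the values are illustrative, not what kubeadm will actually generate on this host.

mkdir -p /var/lib/kubelet
cat <<'EOF' >/var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd matches the SystemdCgroup:true runc option in the containerd config above
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
EOF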
Feb 13 15:39:16.523462 sshd[1683]: Accepted publickey for core from 139.178.68.195 port 33932 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:16.525509 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:16.532847 systemd-logind[1480]: New session 2 of user core. Feb 13 15:39:16.543714 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:39:16.739136 sshd[1685]: Connection closed by 139.178.68.195 port 33932 Feb 13 15:39:16.740303 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:16.746010 systemd[1]: sshd@1-10.128.0.26:22-139.178.68.195:33932.service: Deactivated successfully. Feb 13 15:39:16.748956 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:39:16.751475 systemd-logind[1480]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:39:16.753232 systemd-logind[1480]: Removed session 2. Feb 13 15:39:16.797806 systemd[1]: Started sshd@2-10.128.0.26:22-139.178.68.195:51488.service - OpenSSH per-connection server daemon (139.178.68.195:51488). Feb 13 15:39:17.097850 sshd[1691]: Accepted publickey for core from 139.178.68.195 port 51488 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:17.099663 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:17.107241 systemd-logind[1480]: New session 3 of user core. Feb 13 15:39:17.120711 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:39:17.306331 sshd[1693]: Connection closed by 139.178.68.195 port 51488 Feb 13 15:39:17.307218 sshd-session[1691]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:17.312859 systemd[1]: sshd@2-10.128.0.26:22-139.178.68.195:51488.service: Deactivated successfully. Feb 13 15:39:17.315283 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:39:17.316423 systemd-logind[1480]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:39:17.318013 systemd-logind[1480]: Removed session 3. Feb 13 15:39:17.366067 systemd[1]: Started sshd@3-10.128.0.26:22-139.178.68.195:51500.service - OpenSSH per-connection server daemon (139.178.68.195:51500). Feb 13 15:39:17.654481 sshd[1699]: Accepted publickey for core from 139.178.68.195 port 51500 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:17.656245 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:17.663226 systemd-logind[1480]: New session 4 of user core. Feb 13 15:39:17.669680 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:39:17.867555 sshd[1701]: Connection closed by 139.178.68.195 port 51500 Feb 13 15:39:17.868456 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:17.873746 systemd[1]: sshd@3-10.128.0.26:22-139.178.68.195:51500.service: Deactivated successfully. Feb 13 15:39:17.876159 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:39:17.877162 systemd-logind[1480]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:39:17.878753 systemd-logind[1480]: Removed session 4. Feb 13 15:39:17.924835 systemd[1]: Started sshd@4-10.128.0.26:22-139.178.68.195:51510.service - OpenSSH per-connection server daemon (139.178.68.195:51510). 
Feb 13 15:39:18.221504 sshd[1707]: Accepted publickey for core from 139.178.68.195 port 51510 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:18.223688 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:18.230266 systemd-logind[1480]: New session 5 of user core. Feb 13 15:39:18.234617 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:39:18.420394 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:39:18.420937 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:39:18.441214 sudo[1710]: pam_unix(sudo:session): session closed for user root Feb 13 15:39:18.483988 sshd[1709]: Connection closed by 139.178.68.195 port 51510 Feb 13 15:39:18.485743 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:18.491903 systemd[1]: sshd@4-10.128.0.26:22-139.178.68.195:51510.service: Deactivated successfully. Feb 13 15:39:18.494679 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:39:18.497014 systemd-logind[1480]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:39:18.498766 systemd-logind[1480]: Removed session 5. Feb 13 15:39:18.542896 systemd[1]: Started sshd@5-10.128.0.26:22-139.178.68.195:51516.service - OpenSSH per-connection server daemon (139.178.68.195:51516). Feb 13 15:39:18.854459 sshd[1716]: Accepted publickey for core from 139.178.68.195 port 51516 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:18.856587 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:18.863120 systemd-logind[1480]: New session 6 of user core. Feb 13 15:39:18.870701 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:39:19.038332 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:39:19.038917 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:39:19.044211 sudo[1720]: pam_unix(sudo:session): session closed for user root Feb 13 15:39:19.058154 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:39:19.058665 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:39:19.076194 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:39:19.116783 augenrules[1742]: No rules Feb 13 15:39:19.118600 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:39:19.118971 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:39:19.120404 sudo[1719]: pam_unix(sudo:session): session closed for user root Feb 13 15:39:19.164293 sshd[1718]: Connection closed by 139.178.68.195 port 51516 Feb 13 15:39:19.165160 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:19.170047 systemd[1]: sshd@5-10.128.0.26:22-139.178.68.195:51516.service: Deactivated successfully. Feb 13 15:39:19.172463 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:39:19.174468 systemd-logind[1480]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:39:19.175948 systemd-logind[1480]: Removed session 6. Feb 13 15:39:19.224809 systemd[1]: Started sshd@6-10.128.0.26:22-139.178.68.195:51528.service - OpenSSH per-connection server daemon (139.178.68.195:51528). 
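The session-6 commands remove the stock SELinux/default audit rule files and restart audit-rules.service, so augenrules finds nothing to load ("No rules"). augenrules simply concatenates every file under /etc/audit/rules.d/ and loads the result, so auditing can be repopulated later by dropping a file back in; a small example with an arbitrary watch rule:

cat <<'EOF' >/etc/audit/rules.d/50-passwd.rules
# record writes and attribute changes on /etc/passwd
-w /etc/passwd -p wa -k passwd_changes
EOF

augenrules --load   # rebuild /etc/audit/audit.rules and load it into the kernel
auditctl -l         # list the rules that are now active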
Feb 13 15:39:19.512880 sshd[1751]: Accepted publickey for core from 139.178.68.195 port 51528 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:19.514748 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:19.521910 systemd-logind[1480]: New session 7 of user core. Feb 13 15:39:19.528607 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:39:19.693759 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:39:19.694291 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:39:20.111283 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:39:20.122783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:20.233910 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:39:20.244125 (dockerd)[1774]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:39:20.501635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:20.506750 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:39:20.593607 kubelet[1780]: E0213 15:39:20.593463 1780 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:39:20.598796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:39:20.599054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:39:20.599620 systemd[1]: kubelet.service: Consumed 253ms CPU time, 104M memory peak. Feb 13 15:39:20.800764 dockerd[1774]: time="2025-02-13T15:39:20.800559612Z" level=info msg="Starting up" Feb 13 15:39:20.944419 dockerd[1774]: time="2025-02-13T15:39:20.944306369Z" level=info msg="Loading containers: start." Feb 13 15:39:21.161403 kernel: Initializing XFRM netlink socket Feb 13 15:39:21.284064 systemd-networkd[1397]: docker0: Link UP Feb 13 15:39:21.321683 dockerd[1774]: time="2025-02-13T15:39:21.321619389Z" level=info msg="Loading containers: done." Feb 13 15:39:21.346522 dockerd[1774]: time="2025-02-13T15:39:21.346460009Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:39:21.346752 dockerd[1774]: time="2025-02-13T15:39:21.346595834Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:39:21.346836 dockerd[1774]: time="2025-02-13T15:39:21.346765325Z" level=info msg="Daemon has completed initialization" Feb 13 15:39:21.387244 dockerd[1774]: time="2025-02-13T15:39:21.387134706Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:39:21.387629 systemd[1]: Started docker.service - Docker Application Container Engine. 
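dockerd comes up on overlay2 but warns that native overlay diff is disabled because the kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR, so image builds fall back to the slower naive diff path. Two quick ways to confirm this state, assuming the kernel exposes /proc/config.gz (not every build does):

# storage driver details; look for the "Native Overlay Diff" line
docker info 2>/dev/null | grep -iA8 'storage driver'

# the kernel option the warning refers to
zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR /proc/config.gz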
Feb 13 15:39:22.223414 containerd[1504]: time="2025-02-13T15:39:22.223328266Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 15:39:22.718875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4274169670.mount: Deactivated successfully. Feb 13 15:39:24.251137 containerd[1504]: time="2025-02-13T15:39:24.251046638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:24.252913 containerd[1504]: time="2025-02-13T15:39:24.252842881Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28680559" Feb 13 15:39:24.254319 containerd[1504]: time="2025-02-13T15:39:24.254246422Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:24.258156 containerd[1504]: time="2025-02-13T15:39:24.258088432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:24.260035 containerd[1504]: time="2025-02-13T15:39:24.259754751Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 2.036349131s" Feb 13 15:39:24.260035 containerd[1504]: time="2025-02-13T15:39:24.259809826Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 15:39:24.261114 containerd[1504]: time="2025-02-13T15:39:24.261044799Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 15:39:24.774673 systemd[1]: Started sshd@7-10.128.0.26:22-218.92.0.190:23736.service - OpenSSH per-connection server daemon (218.92.0.190:23736). 
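From here containerd starts pulling the Kubernetes control-plane images (kube-apiserver v1.32.2 above completes in about 2 s); the sshd@7 connection from 218.92.0.190 in the same entry appears to be an unrelated inbound login attempt that is denied further down. Since the pulls go through the CRI, they can be reproduced or inspected with crictl pointed at the containerd socket announced earlier:

export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock

# pull the same image by hand and check the CRI image store
crictl pull registry.k8s.io/kube-apiserver:v1.32.2
crictl images | grep kube-apiserver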
Feb 13 15:39:25.782481 containerd[1504]: time="2025-02-13T15:39:25.782414840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:25.784258 containerd[1504]: time="2025-02-13T15:39:25.784189055Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24773718" Feb 13 15:39:25.785130 containerd[1504]: time="2025-02-13T15:39:25.785035726Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:25.796258 containerd[1504]: time="2025-02-13T15:39:25.796139400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:25.797669 containerd[1504]: time="2025-02-13T15:39:25.797430812Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.536134026s" Feb 13 15:39:25.797669 containerd[1504]: time="2025-02-13T15:39:25.797486908Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 15:39:25.798934 containerd[1504]: time="2025-02-13T15:39:25.798589792Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 15:39:27.052006 containerd[1504]: time="2025-02-13T15:39:27.051935034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:27.053743 containerd[1504]: time="2025-02-13T15:39:27.053665466Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19172192" Feb 13 15:39:27.055343 containerd[1504]: time="2025-02-13T15:39:27.055261579Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:27.060104 containerd[1504]: time="2025-02-13T15:39:27.060038538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:27.062120 containerd[1504]: time="2025-02-13T15:39:27.061530268Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.262896295s" Feb 13 15:39:27.062120 containerd[1504]: time="2025-02-13T15:39:27.061590067Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 15:39:27.062779 
containerd[1504]: time="2025-02-13T15:39:27.062702785Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 15:39:28.341654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount414327746.mount: Deactivated successfully. Feb 13 15:39:28.995196 containerd[1504]: time="2025-02-13T15:39:28.995118869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:28.996550 containerd[1504]: time="2025-02-13T15:39:28.996494561Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30910734" Feb 13 15:39:28.998067 containerd[1504]: time="2025-02-13T15:39:28.998027510Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:29.000878 containerd[1504]: time="2025-02-13T15:39:29.000836530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:29.002052 containerd[1504]: time="2025-02-13T15:39:29.001709695Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 1.938953533s" Feb 13 15:39:29.002052 containerd[1504]: time="2025-02-13T15:39:29.001758453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 15:39:29.002495 containerd[1504]: time="2025-02-13T15:39:29.002320147Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 15:39:29.426752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938329367.mount: Deactivated successfully. 
Feb 13 15:39:30.570609 containerd[1504]: time="2025-02-13T15:39:30.570532397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:30.572612 containerd[1504]: time="2025-02-13T15:39:30.572317973Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Feb 13 15:39:30.574235 containerd[1504]: time="2025-02-13T15:39:30.574141740Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:30.578793 containerd[1504]: time="2025-02-13T15:39:30.578683622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:30.580724 containerd[1504]: time="2025-02-13T15:39:30.580469655Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.577817834s" Feb 13 15:39:30.580724 containerd[1504]: time="2025-02-13T15:39:30.580526895Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 15:39:30.582081 containerd[1504]: time="2025-02-13T15:39:30.581966144Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:39:30.850173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:39:30.858912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:31.178634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2987021330.mount: Deactivated successfully. 
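"Scheduled restart job, restart counter is at 2" is systemd re-launching kubelet roughly every ten seconds after each missing-config failure, which is consistent with the Restart=always / RestartSec=10 settings that kubeadm-style kubelet units typically carry (the exact unit on this host is not shown in the log). A drop-in of that shape would look like:

mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' >/etc/systemd/system/kubelet.service.d/10-restart.conf
[Service]
Restart=always
RestartSec=10
EOF

systemctl daemon-reload
systemctl restart kubelet.service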
Feb 13 15:39:31.184346 containerd[1504]: time="2025-02-13T15:39:31.183045879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:31.185966 containerd[1504]: time="2025-02-13T15:39:31.185902478Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Feb 13 15:39:31.188234 containerd[1504]: time="2025-02-13T15:39:31.188189263Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:31.192484 containerd[1504]: time="2025-02-13T15:39:31.192428824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:31.195795 containerd[1504]: time="2025-02-13T15:39:31.194671221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 612.643688ms" Feb 13 15:39:31.195795 containerd[1504]: time="2025-02-13T15:39:31.194723636Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 15:39:31.197195 containerd[1504]: time="2025-02-13T15:39:31.196968726Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 15:39:31.203242 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:31.216257 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:39:31.276553 kubelet[2111]: E0213 15:39:31.276417 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:39:31.279499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:39:31.279797 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:39:31.280632 systemd[1]: kubelet.service: Consumed 210ms CPU time, 103.1M memory peak. Feb 13 15:39:31.601827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2524878721.mount: Deactivated successfully. 
Feb 13 15:39:31.958824 sshd[2029]: PAM: Permission denied for root from 218.92.0.190 Feb 13 15:39:33.425550 sshd[2029]: PAM: Permission denied for root from 218.92.0.190 Feb 13 15:39:33.802197 containerd[1504]: time="2025-02-13T15:39:33.802018212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:33.804016 containerd[1504]: time="2025-02-13T15:39:33.803949537Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57557903" Feb 13 15:39:33.806312 containerd[1504]: time="2025-02-13T15:39:33.806224082Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:33.810636 containerd[1504]: time="2025-02-13T15:39:33.810558650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:33.812343 containerd[1504]: time="2025-02-13T15:39:33.812171858Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.614567145s" Feb 13 15:39:33.812343 containerd[1504]: time="2025-02-13T15:39:33.812216707Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 15:39:36.633881 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 15:39:36.711774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:36.712355 systemd[1]: kubelet.service: Consumed 210ms CPU time, 103.1M memory peak. Feb 13 15:39:36.720933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:36.773595 systemd[1]: Reload requested from client PID 2205 ('systemctl') (unit session-7.scope)... Feb 13 15:39:36.773644 systemd[1]: Reloading... Feb 13 15:39:36.954466 zram_generator::config[2248]: No configuration found. Feb 13 15:39:37.159122 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:39:37.306280 systemd[1]: Reloading finished in 531 ms. Feb 13 15:39:37.369241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:37.384040 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:39:37.387250 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:37.387889 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:39:37.388260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:37.388329 systemd[1]: kubelet.service: Consumed 176ms CPU time, 91.8M memory peak. Feb 13 15:39:37.403238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:37.730388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
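The docker.socket notice above is only a warning: systemd rewrites the legacy /var/run path at load time. A hedged sketch of the unit change that would silence it (the actual unit file contents are not shown in this log):

    # Hypothetical edit or drop-in for docker.socket
    [Socket]
    ListenStream=/run/docker.sock
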
Feb 13 15:39:37.741955 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:39:37.804252 kubelet[2306]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:39:37.804252 kubelet[2306]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 15:39:37.804252 kubelet[2306]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:39:37.804877 kubelet[2306]: I0213 15:39:37.804344 2306 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:39:38.323044 kubelet[2306]: I0213 15:39:38.322990 2306 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 15:39:38.324109 kubelet[2306]: I0213 15:39:38.323250 2306 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:39:38.324109 kubelet[2306]: I0213 15:39:38.323989 2306 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 15:39:38.361806 kubelet[2306]: E0213 15:39:38.361753 2306 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:39:38.364290 kubelet[2306]: I0213 15:39:38.364079 2306 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:39:38.377841 kubelet[2306]: E0213 15:39:38.377770 2306 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:39:38.377841 kubelet[2306]: I0213 15:39:38.377829 2306 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:39:38.382185 kubelet[2306]: I0213 15:39:38.382149 2306 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:39:38.384079 kubelet[2306]: I0213 15:39:38.384000 2306 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:39:38.384392 kubelet[2306]: I0213 15:39:38.384070 2306 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:39:38.384596 kubelet[2306]: I0213 15:39:38.384412 2306 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:39:38.384596 kubelet[2306]: I0213 15:39:38.384432 2306 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 15:39:38.384709 kubelet[2306]: I0213 15:39:38.384620 2306 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:38.391325 kubelet[2306]: I0213 15:39:38.391280 2306 kubelet.go:446] "Attempting to sync node with API server" Feb 13 15:39:38.391325 kubelet[2306]: I0213 15:39:38.391323 2306 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:39:38.391325 kubelet[2306]: I0213 15:39:38.391357 2306 kubelet.go:352] "Adding apiserver pod source" Feb 13 15:39:38.391633 kubelet[2306]: I0213 15:39:38.391394 2306 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:39:38.396650 kubelet[2306]: W0213 15:39:38.396525 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Feb 13 15:39:38.396650 kubelet[2306]: E0213 15:39:38.396623 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 
10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:39:38.396865 kubelet[2306]: W0213 15:39:38.396726 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Feb 13 15:39:38.396865 kubelet[2306]: E0213 15:39:38.396778 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:39:38.396975 kubelet[2306]: I0213 15:39:38.396911 2306 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:39:38.397632 kubelet[2306]: I0213 15:39:38.397600 2306 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:39:38.399585 kubelet[2306]: W0213 15:39:38.398817 2306 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:39:38.404960 kubelet[2306]: I0213 15:39:38.404931 2306 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 15:39:38.405109 kubelet[2306]: I0213 15:39:38.405003 2306 server.go:1287] "Started kubelet" Feb 13 15:39:38.415907 kubelet[2306]: I0213 15:39:38.415863 2306 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:39:38.419912 kubelet[2306]: I0213 15:39:38.419883 2306 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:39:38.423467 sshd[2029]: Received disconnect from 218.92.0.190 port 23736:11: [preauth] Feb 13 15:39:38.423467 sshd[2029]: Disconnected from authenticating user root 218.92.0.190 port 23736 [preauth] Feb 13 15:39:38.426279 kubelet[2306]: I0213 15:39:38.424228 2306 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 15:39:38.426279 kubelet[2306]: E0213 15:39:38.419604 2306 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal.1823ceb9c768f2b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,UID:ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 15:39:38.404958902 +0000 UTC m=+0.657066670,LastTimestamp:2025-02-13 15:39:38.404958902 +0000 UTC m=+0.657066670,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,}" Feb 13 15:39:38.426279 kubelet[2306]: E0213 15:39:38.424607 2306 kubelet_node_status.go:467] "Error getting the current node from lister" err="node 
\"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" Feb 13 15:39:38.426279 kubelet[2306]: I0213 15:39:38.425324 2306 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:39:38.426279 kubelet[2306]: I0213 15:39:38.425413 2306 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:39:38.426279 kubelet[2306]: I0213 15:39:38.425787 2306 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:39:38.426279 kubelet[2306]: I0213 15:39:38.426132 2306 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:39:38.426225 systemd[1]: sshd@7-10.128.0.26:22-218.92.0.190:23736.service: Deactivated successfully. Feb 13 15:39:38.429241 kubelet[2306]: W0213 15:39:38.429182 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Feb 13 15:39:38.429683 kubelet[2306]: E0213 15:39:38.429656 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:39:38.431611 kubelet[2306]: E0213 15:39:38.431569 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="200ms" Feb 13 15:39:38.432029 kubelet[2306]: I0213 15:39:38.432005 2306 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:39:38.432249 kubelet[2306]: I0213 15:39:38.432225 2306 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:39:38.436507 kubelet[2306]: I0213 15:39:38.436477 2306 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:39:38.438483 kubelet[2306]: I0213 15:39:38.438445 2306 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:39:38.440138 kubelet[2306]: I0213 15:39:38.440114 2306 server.go:490] "Adding debug handlers to kubelet server" Feb 13 15:39:38.457169 kubelet[2306]: I0213 15:39:38.457116 2306 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:39:38.459737 kubelet[2306]: I0213 15:39:38.459237 2306 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:39:38.459737 kubelet[2306]: I0213 15:39:38.459273 2306 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 15:39:38.459737 kubelet[2306]: I0213 15:39:38.459319 2306 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 15:39:38.459737 kubelet[2306]: I0213 15:39:38.459332 2306 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 15:39:38.459737 kubelet[2306]: E0213 15:39:38.459421 2306 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:39:38.463587 kubelet[2306]: W0213 15:39:38.463531 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Feb 13 15:39:38.466540 kubelet[2306]: E0213 15:39:38.466502 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:39:38.469808 kubelet[2306]: E0213 15:39:38.469772 2306 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:39:38.482567 kubelet[2306]: I0213 15:39:38.482528 2306 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 15:39:38.482567 kubelet[2306]: I0213 15:39:38.482560 2306 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 15:39:38.482813 kubelet[2306]: I0213 15:39:38.482588 2306 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:38.525205 kubelet[2306]: E0213 15:39:38.525126 2306 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" Feb 13 15:39:38.561091 kubelet[2306]: E0213 15:39:38.560443 2306 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:39:38.592488 kubelet[2306]: I0213 15:39:38.592328 2306 policy_none.go:49] "None policy: Start" Feb 13 15:39:38.592488 kubelet[2306]: I0213 15:39:38.592414 2306 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 15:39:38.592488 kubelet[2306]: I0213 15:39:38.592440 2306 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:39:38.625860 kubelet[2306]: E0213 15:39:38.625780 2306 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" Feb 13 15:39:38.632823 kubelet[2306]: E0213 15:39:38.632760 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="400ms" Feb 13 15:39:38.726023 kubelet[2306]: E0213 15:39:38.725956 2306 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" Feb 13 15:39:38.766026 kubelet[2306]: E0213 15:39:38.761291 2306 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:39:38.827192 kubelet[2306]: E0213 15:39:38.827126 2306 kubelet_node_status.go:467] "Error getting the current node from lister" err="node 
\"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" Feb 13 15:39:38.848912 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:39:38.861769 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:39:38.875029 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:39:38.878675 kubelet[2306]: I0213 15:39:38.877445 2306 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:39:38.878675 kubelet[2306]: I0213 15:39:38.877762 2306 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:39:38.878675 kubelet[2306]: I0213 15:39:38.877782 2306 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:39:38.878675 kubelet[2306]: I0213 15:39:38.878127 2306 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:39:38.880564 kubelet[2306]: E0213 15:39:38.880530 2306 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 15:39:38.880689 kubelet[2306]: E0213 15:39:38.880606 2306 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" Feb 13 15:39:38.986023 kubelet[2306]: I0213 15:39:38.985966 2306 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:38.986545 kubelet[2306]: E0213 15:39:38.986488 2306 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.033575 kubelet[2306]: E0213 15:39:39.033501 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="800ms" Feb 13 15:39:39.185810 systemd[1]: Created slice kubepods-burstable-poda002c877daf60d4b0f86ae7aa877d657.slice - libcontainer container kubepods-burstable-poda002c877daf60d4b0f86ae7aa877d657.slice. 
Feb 13 15:39:39.188862 kubelet[2306]: I0213 15:39:39.188812 2306 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.190116 kubelet[2306]: E0213 15:39:39.190077 2306 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.194490 kubelet[2306]: E0213 15:39:39.194443 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.201229 systemd[1]: Created slice kubepods-burstable-pod4658dc78d307d7d4eb34acbab58e9ff5.slice - libcontainer container kubepods-burstable-pod4658dc78d307d7d4eb34acbab58e9ff5.slice. Feb 13 15:39:39.204306 kubelet[2306]: E0213 15:39:39.204058 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.206859 systemd[1]: Created slice kubepods-burstable-pod17ba183a9af8ed0dc820fb31b5d6c58e.slice - libcontainer container kubepods-burstable-pod17ba183a9af8ed0dc820fb31b5d6c58e.slice. Feb 13 15:39:39.209313 kubelet[2306]: E0213 15:39:39.209263 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.228841 kubelet[2306]: I0213 15:39:39.228756 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4658dc78d307d7d4eb34acbab58e9ff5-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"4658dc78d307d7d4eb34acbab58e9ff5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.228841 kubelet[2306]: I0213 15:39:39.228832 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4658dc78d307d7d4eb34acbab58e9ff5-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"4658dc78d307d7d4eb34acbab58e9ff5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.229108 kubelet[2306]: I0213 15:39:39.228863 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4658dc78d307d7d4eb34acbab58e9ff5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"4658dc78d307d7d4eb34acbab58e9ff5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.229108 kubelet[2306]: I0213 15:39:39.228891 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/a002c877daf60d4b0f86ae7aa877d657-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"a002c877daf60d4b0f86ae7aa877d657\") " pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.229108 kubelet[2306]: I0213 15:39:39.228916 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a002c877daf60d4b0f86ae7aa877d657-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"a002c877daf60d4b0f86ae7aa877d657\") " pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.229108 kubelet[2306]: I0213 15:39:39.228973 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a002c877daf60d4b0f86ae7aa877d657-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"a002c877daf60d4b0f86ae7aa877d657\") " pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.229304 kubelet[2306]: I0213 15:39:39.229007 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4658dc78d307d7d4eb34acbab58e9ff5-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"4658dc78d307d7d4eb34acbab58e9ff5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.229304 kubelet[2306]: I0213 15:39:39.229064 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4658dc78d307d7d4eb34acbab58e9ff5-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"4658dc78d307d7d4eb34acbab58e9ff5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.229304 kubelet[2306]: I0213 15:39:39.229096 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17ba183a9af8ed0dc820fb31b5d6c58e-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"17ba183a9af8ed0dc820fb31b5d6c58e\") " pod="kube-system/kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.275621 kubelet[2306]: W0213 15:39:39.275541 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Feb 13 15:39:39.275841 kubelet[2306]: E0213 15:39:39.275637 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" 
logger="UnhandledError" Feb 13 15:39:39.461926 kubelet[2306]: W0213 15:39:39.461690 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Feb 13 15:39:39.461926 kubelet[2306]: E0213 15:39:39.461835 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:39:39.497138 containerd[1504]: time="2025-02-13T15:39:39.497072813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,Uid:a002c877daf60d4b0f86ae7aa877d657,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:39.507395 containerd[1504]: time="2025-02-13T15:39:39.507304960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,Uid:4658dc78d307d7d4eb34acbab58e9ff5,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:39.511166 containerd[1504]: time="2025-02-13T15:39:39.511092087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,Uid:17ba183a9af8ed0dc820fb31b5d6c58e,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:39.579176 kubelet[2306]: W0213 15:39:39.578997 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Feb 13 15:39:39.579176 kubelet[2306]: E0213 15:39:39.579122 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:39:39.589402 kubelet[2306]: W0213 15:39:39.589298 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.26:6443: connect: connection refused Feb 13 15:39:39.589576 kubelet[2306]: E0213 15:39:39.589423 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:39:39.600415 kubelet[2306]: I0213 15:39:39.600350 2306 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.600889 kubelet[2306]: E0213 15:39:39.600840 2306 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:39.836160 kubelet[2306]: E0213 
15:39:39.835174 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.26:6443: connect: connection refused" interval="1.6s" Feb 13 15:39:39.864390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240499616.mount: Deactivated successfully. Feb 13 15:39:39.874127 containerd[1504]: time="2025-02-13T15:39:39.874046597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:39.876629 containerd[1504]: time="2025-02-13T15:39:39.876568571Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:39.878964 containerd[1504]: time="2025-02-13T15:39:39.878893368Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Feb 13 15:39:39.880220 containerd[1504]: time="2025-02-13T15:39:39.880153320Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:39:39.883091 containerd[1504]: time="2025-02-13T15:39:39.883044634Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:39.887343 containerd[1504]: time="2025-02-13T15:39:39.886346696Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:39:39.887343 containerd[1504]: time="2025-02-13T15:39:39.886993876Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:39.897091 containerd[1504]: time="2025-02-13T15:39:39.897039645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:39.899071 containerd[1504]: time="2025-02-13T15:39:39.898295375Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 387.081086ms" Feb 13 15:39:39.900710 containerd[1504]: time="2025-02-13T15:39:39.900516018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 403.272368ms" Feb 13 15:39:39.905239 containerd[1504]: time="2025-02-13T15:39:39.905187827Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 397.707223ms" Feb 13 15:39:39.958847 kubelet[2306]: E0213 15:39:39.958650 2306 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal.1823ceb9c768f2b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,UID:ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 15:39:38.404958902 +0000 UTC m=+0.657066670,LastTimestamp:2025-02-13 15:39:38.404958902 +0000 UTC m=+0.657066670,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,}" Feb 13 15:39:40.122066 containerd[1504]: time="2025-02-13T15:39:40.114402723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:40.122066 containerd[1504]: time="2025-02-13T15:39:40.118595689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:40.122066 containerd[1504]: time="2025-02-13T15:39:40.118621869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:40.122066 containerd[1504]: time="2025-02-13T15:39:40.118754646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:40.126001 containerd[1504]: time="2025-02-13T15:39:40.121756480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:40.126001 containerd[1504]: time="2025-02-13T15:39:40.121920855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:40.126001 containerd[1504]: time="2025-02-13T15:39:40.121986250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:40.126001 containerd[1504]: time="2025-02-13T15:39:40.122206414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:40.128663 containerd[1504]: time="2025-02-13T15:39:40.121615242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:40.128663 containerd[1504]: time="2025-02-13T15:39:40.121718376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:40.128663 containerd[1504]: time="2025-02-13T15:39:40.121747930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:40.128663 containerd[1504]: time="2025-02-13T15:39:40.121878208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:40.166177 systemd[1]: Started cri-containerd-cec13737b826eee446808ac5c449ff8e995ffc38870a954a2e2b5a1b1bfce395.scope - libcontainer container cec13737b826eee446808ac5c449ff8e995ffc38870a954a2e2b5a1b1bfce395. Feb 13 15:39:40.181710 systemd[1]: Started cri-containerd-dc5318044efa779fc63da276cadcc9c712d89b9767fbeffbcdec442d4ab953ce.scope - libcontainer container dc5318044efa779fc63da276cadcc9c712d89b9767fbeffbcdec442d4ab953ce. Feb 13 15:39:40.201339 systemd[1]: Started cri-containerd-9facdaf14f1dd87af66f0037bd958b24687f080ad0f3a253d5e750fe81b51fd2.scope - libcontainer container 9facdaf14f1dd87af66f0037bd958b24687f080ad0f3a253d5e750fe81b51fd2. Feb 13 15:39:40.293728 containerd[1504]: time="2025-02-13T15:39:40.293435852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,Uid:4658dc78d307d7d4eb34acbab58e9ff5,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc5318044efa779fc63da276cadcc9c712d89b9767fbeffbcdec442d4ab953ce\"" Feb 13 15:39:40.301482 kubelet[2306]: E0213 15:39:40.301066 2306 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flat" Feb 13 15:39:40.306425 containerd[1504]: time="2025-02-13T15:39:40.306138613Z" level=info msg="CreateContainer within sandbox \"dc5318044efa779fc63da276cadcc9c712d89b9767fbeffbcdec442d4ab953ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:39:40.318406 containerd[1504]: time="2025-02-13T15:39:40.318302632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,Uid:a002c877daf60d4b0f86ae7aa877d657,Namespace:kube-system,Attempt:0,} returns sandbox id \"9facdaf14f1dd87af66f0037bd958b24687f080ad0f3a253d5e750fe81b51fd2\"" Feb 13 15:39:40.322409 kubelet[2306]: E0213 15:39:40.321869 2306 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-21291" Feb 13 15:39:40.330612 containerd[1504]: time="2025-02-13T15:39:40.330548340Z" level=info msg="CreateContainer within sandbox \"9facdaf14f1dd87af66f0037bd958b24687f080ad0f3a253d5e750fe81b51fd2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:39:40.334576 containerd[1504]: time="2025-02-13T15:39:40.334418387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal,Uid:17ba183a9af8ed0dc820fb31b5d6c58e,Namespace:kube-system,Attempt:0,} returns sandbox id \"cec13737b826eee446808ac5c449ff8e995ffc38870a954a2e2b5a1b1bfce395\"" Feb 13 15:39:40.338094 kubelet[2306]: E0213 15:39:40.337691 2306 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-21291" Feb 13 
15:39:40.340400 containerd[1504]: time="2025-02-13T15:39:40.340127907Z" level=info msg="CreateContainer within sandbox \"cec13737b826eee446808ac5c449ff8e995ffc38870a954a2e2b5a1b1bfce395\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:39:40.345469 containerd[1504]: time="2025-02-13T15:39:40.345409952Z" level=info msg="CreateContainer within sandbox \"dc5318044efa779fc63da276cadcc9c712d89b9767fbeffbcdec442d4ab953ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a31efb78a4b8892b882e2d1c02295a0ef790143e002a1821990a43d8e2d1ba02\"" Feb 13 15:39:40.346560 containerd[1504]: time="2025-02-13T15:39:40.346520752Z" level=info msg="StartContainer for \"a31efb78a4b8892b882e2d1c02295a0ef790143e002a1821990a43d8e2d1ba02\"" Feb 13 15:39:40.363524 containerd[1504]: time="2025-02-13T15:39:40.363289227Z" level=info msg="CreateContainer within sandbox \"9facdaf14f1dd87af66f0037bd958b24687f080ad0f3a253d5e750fe81b51fd2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4c58a9522a320834b5675b0fd3b55e0b4639078d57c962c27f2aa369056a2574\"" Feb 13 15:39:40.366787 containerd[1504]: time="2025-02-13T15:39:40.366509993Z" level=info msg="StartContainer for \"4c58a9522a320834b5675b0fd3b55e0b4639078d57c962c27f2aa369056a2574\"" Feb 13 15:39:40.369140 containerd[1504]: time="2025-02-13T15:39:40.369077507Z" level=info msg="CreateContainer within sandbox \"cec13737b826eee446808ac5c449ff8e995ffc38870a954a2e2b5a1b1bfce395\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ce3812220fdc969a95edf8ab11ac5c66beb9b0c59855c7eb189d056d3600d464\"" Feb 13 15:39:40.370828 containerd[1504]: time="2025-02-13T15:39:40.370651641Z" level=info msg="StartContainer for \"ce3812220fdc969a95edf8ab11ac5c66beb9b0c59855c7eb189d056d3600d464\"" Feb 13 15:39:40.407544 kubelet[2306]: I0213 15:39:40.407163 2306 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:40.411850 kubelet[2306]: E0213 15:39:40.408834 2306 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.128.0.26:6443/api/v1/nodes\": dial tcp 10.128.0.26:6443: connect: connection refused" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:40.413827 systemd[1]: Started cri-containerd-a31efb78a4b8892b882e2d1c02295a0ef790143e002a1821990a43d8e2d1ba02.scope - libcontainer container a31efb78a4b8892b882e2d1c02295a0ef790143e002a1821990a43d8e2d1ba02. Feb 13 15:39:40.430214 kubelet[2306]: E0213 15:39:40.430102 2306 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.26:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:39:40.458679 systemd[1]: Started cri-containerd-4c58a9522a320834b5675b0fd3b55e0b4639078d57c962c27f2aa369056a2574.scope - libcontainer container 4c58a9522a320834b5675b0fd3b55e0b4639078d57c962c27f2aa369056a2574. Feb 13 15:39:40.471993 systemd[1]: Started cri-containerd-ce3812220fdc969a95edf8ab11ac5c66beb9b0c59855c7eb189d056d3600d464.scope - libcontainer container ce3812220fdc969a95edf8ab11ac5c66beb9b0c59855c7eb189d056d3600d464. 
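The three sandboxes and containers above come from static pod manifests under /etc/kubernetes/manifests, the path the kubelet registered earlier. A heavily abridged, hypothetical excerpt of such a manifest follows; the real files are written by kubeadm and are not reproduced in this log, and the hostPath volume shown mirrors the k8s-certs mount listed above:

    # Hypothetical excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.32.0   # version assumed from kubeletVersion above
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
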
Feb 13 15:39:40.541343 containerd[1504]: time="2025-02-13T15:39:40.541284573Z" level=info msg="StartContainer for \"a31efb78a4b8892b882e2d1c02295a0ef790143e002a1821990a43d8e2d1ba02\" returns successfully" Feb 13 15:39:40.601510 containerd[1504]: time="2025-02-13T15:39:40.601456547Z" level=info msg="StartContainer for \"4c58a9522a320834b5675b0fd3b55e0b4639078d57c962c27f2aa369056a2574\" returns successfully" Feb 13 15:39:40.655114 containerd[1504]: time="2025-02-13T15:39:40.654956262Z" level=info msg="StartContainer for \"ce3812220fdc969a95edf8ab11ac5c66beb9b0c59855c7eb189d056d3600d464\" returns successfully" Feb 13 15:39:41.522889 kubelet[2306]: E0213 15:39:41.522834 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:41.525396 kubelet[2306]: E0213 15:39:41.524837 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:41.525565 kubelet[2306]: E0213 15:39:41.525482 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:42.015603 kubelet[2306]: I0213 15:39:42.015422 2306 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:42.524766 kubelet[2306]: E0213 15:39:42.524716 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:42.528427 kubelet[2306]: E0213 15:39:42.527631 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:42.528427 kubelet[2306]: E0213 15:39:42.528320 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:43.529840 kubelet[2306]: E0213 15:39:43.529785 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:43.531427 kubelet[2306]: E0213 15:39:43.530640 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:44.398413 kubelet[2306]: I0213 15:39:44.398335 2306 apiserver.go:52] "Watching apiserver" Feb 13 15:39:44.505298 kubelet[2306]: E0213 15:39:44.505217 2306 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:44.526053 kubelet[2306]: I0213 15:39:44.525994 2306 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:39:44.555419 kubelet[2306]: I0213 15:39:44.554466 2306 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:44.625393 kubelet[2306]: I0213 15:39:44.625292 2306 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:44.677113 kubelet[2306]: E0213 15:39:44.676910 2306 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:44.677113 kubelet[2306]: I0213 15:39:44.676968 2306 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:44.686831 kubelet[2306]: E0213 15:39:44.686746 2306 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:44.686831 kubelet[2306]: I0213 15:39:44.686827 2306 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:44.692403 kubelet[2306]: E0213 15:39:44.691949 2306 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:45.887498 kubelet[2306]: I0213 15:39:45.887449 2306 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:45.895185 kubelet[2306]: W0213 15:39:45.894618 2306 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 15:39:46.236772 systemd[1]: Reload requested from client PID 2586 ('systemctl') (unit session-7.scope)... Feb 13 15:39:46.236800 systemd[1]: Reloading... Feb 13 15:39:46.393568 zram_generator::config[2634]: No configuration found. 
Feb 13 15:39:46.428246 kubelet[2306]: I0213 15:39:46.427592 2306 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:46.439199 kubelet[2306]: W0213 15:39:46.439158 2306 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 15:39:46.559078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:39:46.746316 systemd[1]: Reloading finished in 508 ms. Feb 13 15:39:46.782248 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:46.793541 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:39:46.793952 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:46.794043 systemd[1]: kubelet.service: Consumed 1.218s CPU time, 124.8M memory peak. Feb 13 15:39:46.801889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:47.164429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:47.180239 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:39:47.269476 kubelet[2680]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:39:47.269476 kubelet[2680]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 15:39:47.269476 kubelet[2680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:39:47.269476 kubelet[2680]: I0213 15:39:47.269122 2680 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:39:47.284706 kubelet[2680]: I0213 15:39:47.284652 2680 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 15:39:47.284706 kubelet[2680]: I0213 15:39:47.284699 2680 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:39:47.286221 kubelet[2680]: I0213 15:39:47.285220 2680 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 15:39:47.288408 kubelet[2680]: I0213 15:39:47.287952 2680 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
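"Client rotation is on" together with the cert/key pair loaded from kubelet-client-current.pem above means this kubelet has already bootstrapped a client certificate; its subject and expiry can be checked with openssl (illustrative command, not from this log):

    openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -enddate
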
Feb 13 15:39:47.292321 kubelet[2680]: I0213 15:39:47.292266 2680 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:39:47.300461 kubelet[2680]: E0213 15:39:47.300301 2680 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:39:47.300943 kubelet[2680]: I0213 15:39:47.300723 2680 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:39:47.306100 kubelet[2680]: I0213 15:39:47.305989 2680 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:39:47.307304 kubelet[2680]: I0213 15:39:47.306628 2680 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:39:47.307304 kubelet[2680]: I0213 15:39:47.306672 2680 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:39:47.307304 kubelet[2680]: I0213 15:39:47.307002 2680 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:39:47.307304 kubelet[2680]: I0213 15:39:47.307021 2680 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 15:39:47.307757 kubelet[2680]: I0213 15:39:47.307087 2680 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:47.308835 kubelet[2680]: I0213 15:39:47.308800 2680 kubelet.go:446] "Attempting to sync node with API server" Feb 13 15:39:47.309041 kubelet[2680]: I0213 15:39:47.308994 2680 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:39:47.309185 kubelet[2680]: I0213 15:39:47.309171 2680 kubelet.go:352] "Adding apiserver pod source" Feb 13 15:39:47.310415 kubelet[2680]: I0213 15:39:47.309274 2680 
apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:39:47.323160 sudo[2695]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:39:47.324595 sudo[2695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:39:47.344664 kubelet[2680]: I0213 15:39:47.343094 2680 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:39:47.349200 kubelet[2680]: I0213 15:39:47.349118 2680 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:39:47.350263 kubelet[2680]: I0213 15:39:47.350216 2680 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 15:39:47.350471 kubelet[2680]: I0213 15:39:47.350297 2680 server.go:1287] "Started kubelet" Feb 13 15:39:47.362757 kubelet[2680]: I0213 15:39:47.362717 2680 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:39:47.369173 kubelet[2680]: E0213 15:39:47.369125 2680 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:39:47.371475 kubelet[2680]: I0213 15:39:47.369883 2680 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:39:47.377032 kubelet[2680]: I0213 15:39:47.376954 2680 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:39:47.382349 kubelet[2680]: I0213 15:39:47.382248 2680 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:39:47.384880 kubelet[2680]: I0213 15:39:47.377521 2680 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:39:47.385139 kubelet[2680]: I0213 15:39:47.378627 2680 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:39:47.385496 kubelet[2680]: I0213 15:39:47.379928 2680 server.go:490] "Adding debug handlers to kubelet server" Feb 13 15:39:47.391835 kubelet[2680]: I0213 15:39:47.377497 2680 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 15:39:47.392477 kubelet[2680]: I0213 15:39:47.383931 2680 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:39:47.394427 kubelet[2680]: I0213 15:39:47.394169 2680 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:39:47.394834 kubelet[2680]: I0213 15:39:47.394699 2680 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:39:47.398512 kubelet[2680]: I0213 15:39:47.398489 2680 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:39:47.415227 kubelet[2680]: I0213 15:39:47.415150 2680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:39:47.428155 kubelet[2680]: I0213 15:39:47.428013 2680 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:39:47.430627 kubelet[2680]: I0213 15:39:47.430144 2680 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 15:39:47.430627 kubelet[2680]: I0213 15:39:47.430245 2680 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 15:39:47.431819 kubelet[2680]: I0213 15:39:47.431799 2680 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 15:39:47.435143 kubelet[2680]: E0213 15:39:47.434996 2680 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:39:47.514661 kubelet[2680]: I0213 15:39:47.514619 2680 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 15:39:47.514661 kubelet[2680]: I0213 15:39:47.514649 2680 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 15:39:47.514661 kubelet[2680]: I0213 15:39:47.514679 2680 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:47.514993 kubelet[2680]: I0213 15:39:47.514957 2680 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:39:47.515061 kubelet[2680]: I0213 15:39:47.514976 2680 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:39:47.515061 kubelet[2680]: I0213 15:39:47.515011 2680 policy_none.go:49] "None policy: Start" Feb 13 15:39:47.515061 kubelet[2680]: I0213 15:39:47.515030 2680 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 15:39:47.515061 kubelet[2680]: I0213 15:39:47.515048 2680 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:39:47.515244 kubelet[2680]: I0213 15:39:47.515234 2680 state_mem.go:75] "Updated machine memory state" Feb 13 15:39:47.523906 kubelet[2680]: I0213 15:39:47.523729 2680 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:39:47.524642 kubelet[2680]: I0213 15:39:47.524237 2680 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:39:47.526193 kubelet[2680]: I0213 15:39:47.526133 2680 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:39:47.527622 kubelet[2680]: I0213 15:39:47.527590 2680 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:39:47.528254 kubelet[2680]: E0213 15:39:47.528212 2680 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 15:39:47.541669 kubelet[2680]: I0213 15:39:47.541629 2680 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.544150 kubelet[2680]: I0213 15:39:47.544120 2680 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.547554 kubelet[2680]: I0213 15:39:47.547243 2680 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.571842 kubelet[2680]: W0213 15:39:47.566599 2680 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 15:39:47.571842 kubelet[2680]: W0213 15:39:47.569599 2680 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 15:39:47.571842 kubelet[2680]: E0213 15:39:47.569670 2680 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.571842 kubelet[2680]: W0213 15:39:47.569751 2680 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 15:39:47.571842 kubelet[2680]: E0213 15:39:47.569785 2680 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.596182 kubelet[2680]: I0213 15:39:47.595589 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4658dc78d307d7d4eb34acbab58e9ff5-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"4658dc78d307d7d4eb34acbab58e9ff5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.596182 kubelet[2680]: I0213 15:39:47.595652 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4658dc78d307d7d4eb34acbab58e9ff5-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"4658dc78d307d7d4eb34acbab58e9ff5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.596182 kubelet[2680]: I0213 15:39:47.595685 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4658dc78d307d7d4eb34acbab58e9ff5-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"4658dc78d307d7d4eb34acbab58e9ff5\") " 
pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.596182 kubelet[2680]: I0213 15:39:47.595720 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17ba183a9af8ed0dc820fb31b5d6c58e-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"17ba183a9af8ed0dc820fb31b5d6c58e\") " pod="kube-system/kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.596643 kubelet[2680]: I0213 15:39:47.595772 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4658dc78d307d7d4eb34acbab58e9ff5-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"4658dc78d307d7d4eb34acbab58e9ff5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.596643 kubelet[2680]: I0213 15:39:47.595805 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4658dc78d307d7d4eb34acbab58e9ff5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"4658dc78d307d7d4eb34acbab58e9ff5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.596643 kubelet[2680]: I0213 15:39:47.595849 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a002c877daf60d4b0f86ae7aa877d657-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"a002c877daf60d4b0f86ae7aa877d657\") " pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.596643 kubelet[2680]: I0213 15:39:47.595881 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a002c877daf60d4b0f86ae7aa877d657-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"a002c877daf60d4b0f86ae7aa877d657\") " pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.596858 kubelet[2680]: I0213 15:39:47.595914 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a002c877daf60d4b0f86ae7aa877d657-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" (UID: \"a002c877daf60d4b0f86ae7aa877d657\") " pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.660394 kubelet[2680]: I0213 15:39:47.658604 2680 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.675559 kubelet[2680]: I0213 15:39:47.675497 2680 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:47.676672 kubelet[2680]: I0213 15:39:47.675955 2680 
kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:48.169994 sudo[2695]: pam_unix(sudo:session): session closed for user root Feb 13 15:39:48.322409 kubelet[2680]: I0213 15:39:48.321081 2680 apiserver.go:52] "Watching apiserver" Feb 13 15:39:48.385732 kubelet[2680]: I0213 15:39:48.385211 2680 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:39:48.453125 kubelet[2680]: I0213 15:39:48.452999 2680 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:48.456396 kubelet[2680]: I0213 15:39:48.456213 2680 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:48.469941 kubelet[2680]: W0213 15:39:48.469668 2680 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 15:39:48.469941 kubelet[2680]: E0213 15:39:48.470030 2680 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:48.469941 kubelet[2680]: W0213 15:39:48.470353 2680 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 15:39:48.472018 kubelet[2680]: E0213 15:39:48.471986 2680 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" Feb 13 15:39:48.534045 kubelet[2680]: I0213 15:39:48.533540 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" podStartSLOduration=1.5335098660000002 podStartE2EDuration="1.533509866s" podCreationTimestamp="2025-02-13 15:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:48.521082638 +0000 UTC m=+1.331963729" watchObservedRunningTime="2025-02-13 15:39:48.533509866 +0000 UTC m=+1.344391181" Feb 13 15:39:48.549518 kubelet[2680]: I0213 15:39:48.548731 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" podStartSLOduration=2.548702672 podStartE2EDuration="2.548702672s" podCreationTimestamp="2025-02-13 15:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:48.53520856 +0000 UTC m=+1.346089651" watchObservedRunningTime="2025-02-13 15:39:48.548702672 +0000 UTC m=+1.359583761" Feb 13 15:39:48.551789 kubelet[2680]: I0213 15:39:48.551529 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" podStartSLOduration=3.551505916 
podStartE2EDuration="3.551505916s" podCreationTimestamp="2025-02-13 15:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:48.550497083 +0000 UTC m=+1.361378165" watchObservedRunningTime="2025-02-13 15:39:48.551505916 +0000 UTC m=+1.362386999" Feb 13 15:39:50.484859 sudo[1754]: pam_unix(sudo:session): session closed for user root Feb 13 15:39:50.527567 sshd[1753]: Connection closed by 139.178.68.195 port 51528 Feb 13 15:39:50.528495 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:50.536725 systemd[1]: sshd@6-10.128.0.26:22-139.178.68.195:51528.service: Deactivated successfully. Feb 13 15:39:50.539925 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:39:50.540234 systemd[1]: session-7.scope: Consumed 6.337s CPU time, 263.9M memory peak. Feb 13 15:39:50.542520 systemd-logind[1480]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:39:50.544203 systemd-logind[1480]: Removed session 7. Feb 13 15:39:51.018483 update_engine[1485]: I20250213 15:39:51.018160 1485 update_attempter.cc:509] Updating boot flags... Feb 13 15:39:51.103438 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2760) Feb 13 15:39:51.285974 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2761) Feb 13 15:39:51.467495 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2761) Feb 13 15:39:53.986712 kubelet[2680]: I0213 15:39:53.986668 2680 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:39:53.987499 containerd[1504]: time="2025-02-13T15:39:53.987251527Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:39:53.987963 kubelet[2680]: I0213 15:39:53.987922 2680 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:39:54.831486 systemd[1]: Created slice kubepods-besteffort-pod73378ba8_be80_4f6d_8cd2_0e7bc27f386c.slice - libcontainer container kubepods-besteffort-pod73378ba8_be80_4f6d_8cd2_0e7bc27f386c.slice. 
Feb 13 15:39:54.847943 kubelet[2680]: I0213 15:39:54.847883 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/73378ba8-be80-4f6d-8cd2-0e7bc27f386c-kube-proxy\") pod \"kube-proxy-hd6wp\" (UID: \"73378ba8-be80-4f6d-8cd2-0e7bc27f386c\") " pod="kube-system/kube-proxy-hd6wp" Feb 13 15:39:54.848144 kubelet[2680]: I0213 15:39:54.847946 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2sbc\" (UniqueName: \"kubernetes.io/projected/73378ba8-be80-4f6d-8cd2-0e7bc27f386c-kube-api-access-b2sbc\") pod \"kube-proxy-hd6wp\" (UID: \"73378ba8-be80-4f6d-8cd2-0e7bc27f386c\") " pod="kube-system/kube-proxy-hd6wp" Feb 13 15:39:54.848144 kubelet[2680]: I0213 15:39:54.847991 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73378ba8-be80-4f6d-8cd2-0e7bc27f386c-xtables-lock\") pod \"kube-proxy-hd6wp\" (UID: \"73378ba8-be80-4f6d-8cd2-0e7bc27f386c\") " pod="kube-system/kube-proxy-hd6wp" Feb 13 15:39:54.848144 kubelet[2680]: I0213 15:39:54.848015 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73378ba8-be80-4f6d-8cd2-0e7bc27f386c-lib-modules\") pod \"kube-proxy-hd6wp\" (UID: \"73378ba8-be80-4f6d-8cd2-0e7bc27f386c\") " pod="kube-system/kube-proxy-hd6wp" Feb 13 15:39:54.869087 systemd[1]: Created slice kubepods-burstable-podb610ad1e_8f4c_449f_beb7_c5b587e58f09.slice - libcontainer container kubepods-burstable-podb610ad1e_8f4c_449f_beb7_c5b587e58f09.slice. Feb 13 15:39:54.949530 kubelet[2680]: I0213 15:39:54.948526 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-hostproc\") pod \"cilium-xrhr5\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:54.949530 kubelet[2680]: I0213 15:39:54.948583 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cilium-cgroup\") pod \"cilium-xrhr5\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:54.949530 kubelet[2680]: I0213 15:39:54.948609 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b610ad1e-8f4c-449f-beb7-c5b587e58f09-clustermesh-secrets\") pod \"cilium-xrhr5\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:54.949530 kubelet[2680]: I0213 15:39:54.948633 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cilium-config-path\") pod \"cilium-xrhr5\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:54.949530 kubelet[2680]: I0213 15:39:54.948683 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cni-path\") pod \"cilium-xrhr5\" (UID: 
\"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:54.949530 kubelet[2680]: I0213 15:39:54.948726 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-etc-cni-netd\") pod \"cilium-xrhr5\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:54.949988 kubelet[2680]: I0213 15:39:54.948816 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-lib-modules\") pod \"cilium-xrhr5\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:54.949988 kubelet[2680]: I0213 15:39:54.949689 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cilium-run\") pod \"cilium-xrhr5\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:54.949988 kubelet[2680]: I0213 15:39:54.949768 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-xtables-lock\") pod \"cilium-xrhr5\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:54.949988 kubelet[2680]: I0213 15:39:54.949841 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-host-proc-sys-net\") pod \"cilium-xrhr5\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:54.949988 kubelet[2680]: I0213 15:39:54.949895 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6brcz\" (UniqueName: \"kubernetes.io/projected/b610ad1e-8f4c-449f-beb7-c5b587e58f09-kube-api-access-6brcz\") pod \"cilium-xrhr5\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:54.950246 kubelet[2680]: I0213 15:39:54.949985 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-bpf-maps\") pod \"cilium-xrhr5\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:54.950246 kubelet[2680]: I0213 15:39:54.950045 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-host-proc-sys-kernel\") pod \"cilium-xrhr5\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:54.950246 kubelet[2680]: I0213 15:39:54.950095 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b610ad1e-8f4c-449f-beb7-c5b587e58f09-hubble-tls\") pod \"cilium-xrhr5\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " pod="kube-system/cilium-xrhr5" Feb 13 15:39:55.138901 kubelet[2680]: I0213 15:39:55.138809 2680 status_manager.go:890] "Failed to get status for 
pod" podUID="d4716e82-7dfc-4609-ba52-24c5467a7bdb" pod="kube-system/cilium-operator-6c4d7847fc-g55p9" err="pods \"cilium-operator-6c4d7847fc-g55p9\" is forbidden: User \"system:node:ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal' and this object" Feb 13 15:39:55.144626 containerd[1504]: time="2025-02-13T15:39:55.144571815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hd6wp,Uid:73378ba8-be80-4f6d-8cd2-0e7bc27f386c,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:55.152310 kubelet[2680]: I0213 15:39:55.151631 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krgrm\" (UniqueName: \"kubernetes.io/projected/d4716e82-7dfc-4609-ba52-24c5467a7bdb-kube-api-access-krgrm\") pod \"cilium-operator-6c4d7847fc-g55p9\" (UID: \"d4716e82-7dfc-4609-ba52-24c5467a7bdb\") " pod="kube-system/cilium-operator-6c4d7847fc-g55p9" Feb 13 15:39:55.152310 kubelet[2680]: I0213 15:39:55.151698 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4716e82-7dfc-4609-ba52-24c5467a7bdb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-g55p9\" (UID: \"d4716e82-7dfc-4609-ba52-24c5467a7bdb\") " pod="kube-system/cilium-operator-6c4d7847fc-g55p9" Feb 13 15:39:55.151786 systemd[1]: Created slice kubepods-besteffort-podd4716e82_7dfc_4609_ba52_24c5467a7bdb.slice - libcontainer container kubepods-besteffort-podd4716e82_7dfc_4609_ba52_24c5467a7bdb.slice. Feb 13 15:39:55.177399 containerd[1504]: time="2025-02-13T15:39:55.177195337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xrhr5,Uid:b610ad1e-8f4c-449f-beb7-c5b587e58f09,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:55.189172 containerd[1504]: time="2025-02-13T15:39:55.189007418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:55.189172 containerd[1504]: time="2025-02-13T15:39:55.189089590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:55.189172 containerd[1504]: time="2025-02-13T15:39:55.189126814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:55.189621 containerd[1504]: time="2025-02-13T15:39:55.189330677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:55.226172 systemd[1]: Started cri-containerd-710cdf015b8ab9221f8d7345473a5fc0758795c0135efd2a6cef167906669144.scope - libcontainer container 710cdf015b8ab9221f8d7345473a5fc0758795c0135efd2a6cef167906669144. Feb 13 15:39:55.234456 containerd[1504]: time="2025-02-13T15:39:55.234089545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:55.234456 containerd[1504]: time="2025-02-13T15:39:55.234160117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:55.234456 containerd[1504]: time="2025-02-13T15:39:55.234179795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:55.234456 containerd[1504]: time="2025-02-13T15:39:55.234314220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:55.282719 systemd[1]: Started cri-containerd-2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae.scope - libcontainer container 2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae. Feb 13 15:39:55.305652 containerd[1504]: time="2025-02-13T15:39:55.305546101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hd6wp,Uid:73378ba8-be80-4f6d-8cd2-0e7bc27f386c,Namespace:kube-system,Attempt:0,} returns sandbox id \"710cdf015b8ab9221f8d7345473a5fc0758795c0135efd2a6cef167906669144\"" Feb 13 15:39:55.313697 containerd[1504]: time="2025-02-13T15:39:55.313500099Z" level=info msg="CreateContainer within sandbox \"710cdf015b8ab9221f8d7345473a5fc0758795c0135efd2a6cef167906669144\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:39:55.333783 containerd[1504]: time="2025-02-13T15:39:55.333575740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xrhr5,Uid:b610ad1e-8f4c-449f-beb7-c5b587e58f09,Namespace:kube-system,Attempt:0,} returns sandbox id \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\"" Feb 13 15:39:55.336810 containerd[1504]: time="2025-02-13T15:39:55.336771624Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:39:55.342421 containerd[1504]: time="2025-02-13T15:39:55.342349813Z" level=info msg="CreateContainer within sandbox \"710cdf015b8ab9221f8d7345473a5fc0758795c0135efd2a6cef167906669144\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"034883ffb4c14aa6593a37be50c9134a3e9f2454b9fa05e9516bc42b083b64cd\"" Feb 13 15:39:55.344866 containerd[1504]: time="2025-02-13T15:39:55.344827581Z" level=info msg="StartContainer for \"034883ffb4c14aa6593a37be50c9134a3e9f2454b9fa05e9516bc42b083b64cd\"" Feb 13 15:39:55.390758 systemd[1]: Started cri-containerd-034883ffb4c14aa6593a37be50c9134a3e9f2454b9fa05e9516bc42b083b64cd.scope - libcontainer container 034883ffb4c14aa6593a37be50c9134a3e9f2454b9fa05e9516bc42b083b64cd. Feb 13 15:39:55.434668 containerd[1504]: time="2025-02-13T15:39:55.433518294Z" level=info msg="StartContainer for \"034883ffb4c14aa6593a37be50c9134a3e9f2454b9fa05e9516bc42b083b64cd\" returns successfully" Feb 13 15:39:55.456825 containerd[1504]: time="2025-02-13T15:39:55.456776863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g55p9,Uid:d4716e82-7dfc-4609-ba52-24c5467a7bdb,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:55.513848 containerd[1504]: time="2025-02-13T15:39:55.512752339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:55.513848 containerd[1504]: time="2025-02-13T15:39:55.512856715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:55.513848 containerd[1504]: time="2025-02-13T15:39:55.512880251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:55.519079 containerd[1504]: time="2025-02-13T15:39:55.514179396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:55.555638 systemd[1]: Started cri-containerd-9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177.scope - libcontainer container 9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177. Feb 13 15:39:55.635208 containerd[1504]: time="2025-02-13T15:39:55.635149628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g55p9,Uid:d4716e82-7dfc-4609-ba52-24c5467a7bdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\"" Feb 13 15:39:57.934566 kubelet[2680]: I0213 15:39:57.934434 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hd6wp" podStartSLOduration=3.934399693 podStartE2EDuration="3.934399693s" podCreationTimestamp="2025-02-13 15:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:55.531029613 +0000 UTC m=+8.341910704" watchObservedRunningTime="2025-02-13 15:39:57.934399693 +0000 UTC m=+10.745280783" Feb 13 15:40:00.857706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3866626028.mount: Deactivated successfully. Feb 13 15:40:03.561001 containerd[1504]: time="2025-02-13T15:40:03.560931773Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:03.562711 containerd[1504]: time="2025-02-13T15:40:03.562638906Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:40:03.567402 containerd[1504]: time="2025-02-13T15:40:03.566499918Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:03.571720 containerd[1504]: time="2025-02-13T15:40:03.571672749Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.23446655s" Feb 13 15:40:03.571852 containerd[1504]: time="2025-02-13T15:40:03.571725669Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:40:03.574559 containerd[1504]: time="2025-02-13T15:40:03.574513705Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:40:03.575518 containerd[1504]: time="2025-02-13T15:40:03.575479571Z" level=info msg="CreateContainer within sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 
15:40:03.595795 containerd[1504]: time="2025-02-13T15:40:03.595743909Z" level=info msg="CreateContainer within sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60\"" Feb 13 15:40:03.596763 containerd[1504]: time="2025-02-13T15:40:03.596722868Z" level=info msg="StartContainer for \"35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60\"" Feb 13 15:40:03.642593 systemd[1]: Started cri-containerd-35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60.scope - libcontainer container 35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60. Feb 13 15:40:03.677283 containerd[1504]: time="2025-02-13T15:40:03.677232346Z" level=info msg="StartContainer for \"35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60\" returns successfully" Feb 13 15:40:03.696877 systemd[1]: cri-containerd-35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60.scope: Deactivated successfully. Feb 13 15:40:04.589580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60-rootfs.mount: Deactivated successfully. Feb 13 15:40:05.533722 containerd[1504]: time="2025-02-13T15:40:05.533626800Z" level=info msg="shim disconnected" id=35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60 namespace=k8s.io Feb 13 15:40:05.533722 containerd[1504]: time="2025-02-13T15:40:05.533712763Z" level=warning msg="cleaning up after shim disconnected" id=35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60 namespace=k8s.io Feb 13 15:40:05.533722 containerd[1504]: time="2025-02-13T15:40:05.533729149Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:40:05.908634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41501077.mount: Deactivated successfully. Feb 13 15:40:06.530959 containerd[1504]: time="2025-02-13T15:40:06.529807976Z" level=info msg="CreateContainer within sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:40:06.566108 containerd[1504]: time="2025-02-13T15:40:06.566052122Z" level=info msg="CreateContainer within sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f\"" Feb 13 15:40:06.569350 containerd[1504]: time="2025-02-13T15:40:06.567956561Z" level=info msg="StartContainer for \"bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f\"" Feb 13 15:40:06.638977 systemd[1]: Started cri-containerd-bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f.scope - libcontainer container bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f. Feb 13 15:40:06.702168 containerd[1504]: time="2025-02-13T15:40:06.702116032Z" level=info msg="StartContainer for \"bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f\" returns successfully" Feb 13 15:40:06.724153 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:40:06.726231 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:40:06.726533 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Feb 13 15:40:06.739148 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:40:06.740093 systemd[1]: cri-containerd-bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f.scope: Deactivated successfully. Feb 13 15:40:06.778292 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:40:06.891842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f-rootfs.mount: Deactivated successfully. Feb 13 15:40:06.967637 containerd[1504]: time="2025-02-13T15:40:06.967474071Z" level=info msg="shim disconnected" id=bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f namespace=k8s.io Feb 13 15:40:06.967637 containerd[1504]: time="2025-02-13T15:40:06.967558356Z" level=warning msg="cleaning up after shim disconnected" id=bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f namespace=k8s.io Feb 13 15:40:06.967637 containerd[1504]: time="2025-02-13T15:40:06.967575112Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:40:06.992522 containerd[1504]: time="2025-02-13T15:40:06.992136643Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:40:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:40:07.012122 containerd[1504]: time="2025-02-13T15:40:07.012041317Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:07.013552 containerd[1504]: time="2025-02-13T15:40:07.013487297Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:40:07.015105 containerd[1504]: time="2025-02-13T15:40:07.015009698Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:07.017530 containerd[1504]: time="2025-02-13T15:40:07.017467156Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.442902675s" Feb 13 15:40:07.018066 containerd[1504]: time="2025-02-13T15:40:07.017535887Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:40:07.022183 containerd[1504]: time="2025-02-13T15:40:07.022147113Z" level=info msg="CreateContainer within sandbox \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:40:07.051699 containerd[1504]: time="2025-02-13T15:40:07.051633035Z" level=info msg="CreateContainer within sandbox \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\"" Feb 13 15:40:07.052677 containerd[1504]: time="2025-02-13T15:40:07.052624132Z" level=info msg="StartContainer for \"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\"" Feb 13 15:40:07.107766 systemd[1]: Started cri-containerd-7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc.scope - libcontainer container 7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc. Feb 13 15:40:07.156753 containerd[1504]: time="2025-02-13T15:40:07.156581418Z" level=info msg="StartContainer for \"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\" returns successfully" Feb 13 15:40:07.540917 containerd[1504]: time="2025-02-13T15:40:07.540729137Z" level=info msg="CreateContainer within sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:40:07.578213 containerd[1504]: time="2025-02-13T15:40:07.578144671Z" level=info msg="CreateContainer within sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036\"" Feb 13 15:40:07.583577 containerd[1504]: time="2025-02-13T15:40:07.580165173Z" level=info msg="StartContainer for \"278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036\"" Feb 13 15:40:07.686868 systemd[1]: Started cri-containerd-278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036.scope - libcontainer container 278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036. Feb 13 15:40:07.842933 containerd[1504]: time="2025-02-13T15:40:07.842868779Z" level=info msg="StartContainer for \"278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036\" returns successfully" Feb 13 15:40:07.848450 systemd[1]: cri-containerd-278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036.scope: Deactivated successfully. Feb 13 15:40:07.898116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3717064855.mount: Deactivated successfully. Feb 13 15:40:07.913651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036-rootfs.mount: Deactivated successfully. 
Feb 13 15:40:07.928288 containerd[1504]: time="2025-02-13T15:40:07.928188130Z" level=info msg="shim disconnected" id=278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036 namespace=k8s.io Feb 13 15:40:07.928288 containerd[1504]: time="2025-02-13T15:40:07.928287568Z" level=warning msg="cleaning up after shim disconnected" id=278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036 namespace=k8s.io Feb 13 15:40:07.928288 containerd[1504]: time="2025-02-13T15:40:07.928302093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:40:07.962046 containerd[1504]: time="2025-02-13T15:40:07.961939838Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:40:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:40:08.555959 containerd[1504]: time="2025-02-13T15:40:08.555725883Z" level=info msg="CreateContainer within sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:40:08.581165 containerd[1504]: time="2025-02-13T15:40:08.580930488Z" level=info msg="CreateContainer within sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5\"" Feb 13 15:40:08.583020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2040525054.mount: Deactivated successfully. Feb 13 15:40:08.588842 containerd[1504]: time="2025-02-13T15:40:08.586701733Z" level=info msg="StartContainer for \"d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5\"" Feb 13 15:40:08.647921 kubelet[2680]: I0213 15:40:08.647217 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-g55p9" podStartSLOduration=2.266042832 podStartE2EDuration="13.64717907s" podCreationTimestamp="2025-02-13 15:39:55 +0000 UTC" firstStartedPulling="2025-02-13 15:39:55.637973679 +0000 UTC m=+8.448854759" lastFinishedPulling="2025-02-13 15:40:07.019109928 +0000 UTC m=+19.829990997" observedRunningTime="2025-02-13 15:40:07.732483459 +0000 UTC m=+20.543364544" watchObservedRunningTime="2025-02-13 15:40:08.64717907 +0000 UTC m=+21.458060160" Feb 13 15:40:08.707670 systemd[1]: Started cri-containerd-d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5.scope - libcontainer container d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5. Feb 13 15:40:08.758669 systemd[1]: cri-containerd-d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5.scope: Deactivated successfully. 
Feb 13 15:40:08.761321 containerd[1504]: time="2025-02-13T15:40:08.761246914Z" level=info msg="StartContainer for \"d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5\" returns successfully" Feb 13 15:40:08.798342 containerd[1504]: time="2025-02-13T15:40:08.798217013Z" level=info msg="shim disconnected" id=d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5 namespace=k8s.io Feb 13 15:40:08.798342 containerd[1504]: time="2025-02-13T15:40:08.798323350Z" level=warning msg="cleaning up after shim disconnected" id=d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5 namespace=k8s.io Feb 13 15:40:08.798342 containerd[1504]: time="2025-02-13T15:40:08.798340705Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:40:08.892067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5-rootfs.mount: Deactivated successfully. Feb 13 15:40:09.561484 containerd[1504]: time="2025-02-13T15:40:09.561185388Z" level=info msg="CreateContainer within sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:40:09.595568 containerd[1504]: time="2025-02-13T15:40:09.595515196Z" level=info msg="CreateContainer within sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\"" Feb 13 15:40:09.598124 containerd[1504]: time="2025-02-13T15:40:09.596982363Z" level=info msg="StartContainer for \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\"" Feb 13 15:40:09.645693 systemd[1]: Started cri-containerd-29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5.scope - libcontainer container 29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5. Feb 13 15:40:09.693812 containerd[1504]: time="2025-02-13T15:40:09.693592384Z" level=info msg="StartContainer for \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\" returns successfully" Feb 13 15:40:09.910070 kubelet[2680]: I0213 15:40:09.909641 2680 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 15:40:09.978112 systemd[1]: Created slice kubepods-burstable-pode253d7d5_901b_47ec_9c74_0cd5ed7324d8.slice - libcontainer container kubepods-burstable-pode253d7d5_901b_47ec_9c74_0cd5ed7324d8.slice. Feb 13 15:40:09.995696 systemd[1]: Created slice kubepods-burstable-pod77015497_8753_422e_a3bf_8bd792e5c1c1.slice - libcontainer container kubepods-burstable-pod77015497_8753_422e_a3bf_8bd792e5c1c1.slice. 
Feb 13 15:40:10.067973 kubelet[2680]: I0213 15:40:10.067694 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77015497-8753-422e-a3bf-8bd792e5c1c1-config-volume\") pod \"coredns-668d6bf9bc-cvnq9\" (UID: \"77015497-8753-422e-a3bf-8bd792e5c1c1\") " pod="kube-system/coredns-668d6bf9bc-cvnq9" Feb 13 15:40:10.067973 kubelet[2680]: I0213 15:40:10.067770 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbf4n\" (UniqueName: \"kubernetes.io/projected/e253d7d5-901b-47ec-9c74-0cd5ed7324d8-kube-api-access-rbf4n\") pod \"coredns-668d6bf9bc-wrxm9\" (UID: \"e253d7d5-901b-47ec-9c74-0cd5ed7324d8\") " pod="kube-system/coredns-668d6bf9bc-wrxm9" Feb 13 15:40:10.067973 kubelet[2680]: I0213 15:40:10.067809 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rmvr\" (UniqueName: \"kubernetes.io/projected/77015497-8753-422e-a3bf-8bd792e5c1c1-kube-api-access-2rmvr\") pod \"coredns-668d6bf9bc-cvnq9\" (UID: \"77015497-8753-422e-a3bf-8bd792e5c1c1\") " pod="kube-system/coredns-668d6bf9bc-cvnq9" Feb 13 15:40:10.067973 kubelet[2680]: I0213 15:40:10.067843 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e253d7d5-901b-47ec-9c74-0cd5ed7324d8-config-volume\") pod \"coredns-668d6bf9bc-wrxm9\" (UID: \"e253d7d5-901b-47ec-9c74-0cd5ed7324d8\") " pod="kube-system/coredns-668d6bf9bc-wrxm9" Feb 13 15:40:10.289579 containerd[1504]: time="2025-02-13T15:40:10.289404858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wrxm9,Uid:e253d7d5-901b-47ec-9c74-0cd5ed7324d8,Namespace:kube-system,Attempt:0,}" Feb 13 15:40:10.304683 containerd[1504]: time="2025-02-13T15:40:10.304143208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cvnq9,Uid:77015497-8753-422e-a3bf-8bd792e5c1c1,Namespace:kube-system,Attempt:0,}" Feb 13 15:40:12.128176 systemd-networkd[1397]: cilium_host: Link UP Feb 13 15:40:12.129917 systemd-networkd[1397]: cilium_net: Link UP Feb 13 15:40:12.130273 systemd-networkd[1397]: cilium_net: Gained carrier Feb 13 15:40:12.131803 systemd-networkd[1397]: cilium_host: Gained carrier Feb 13 15:40:12.233279 systemd-networkd[1397]: cilium_host: Gained IPv6LL Feb 13 15:40:12.292137 systemd-networkd[1397]: cilium_vxlan: Link UP Feb 13 15:40:12.292158 systemd-networkd[1397]: cilium_vxlan: Gained carrier Feb 13 15:40:12.377254 systemd-networkd[1397]: cilium_net: Gained IPv6LL Feb 13 15:40:12.593916 kernel: NET: Registered PF_ALG protocol family Feb 13 15:40:13.546908 systemd-networkd[1397]: lxc_health: Link UP Feb 13 15:40:13.562938 systemd-networkd[1397]: lxc_health: Gained carrier Feb 13 15:40:13.893518 systemd-networkd[1397]: lxc14752de7ddb3: Link UP Feb 13 15:40:13.900621 kernel: eth0: renamed from tmp6eb55 Feb 13 15:40:13.915680 systemd-networkd[1397]: lxc14752de7ddb3: Gained carrier Feb 13 15:40:13.915968 systemd-networkd[1397]: cilium_vxlan: Gained IPv6LL Feb 13 15:40:13.962208 kernel: eth0: renamed from tmp56820 Feb 13 15:40:13.960695 systemd-networkd[1397]: lxcce4bd857fa65: Link UP Feb 13 15:40:13.967402 systemd-networkd[1397]: lxcce4bd857fa65: Gained carrier Feb 13 15:40:15.220336 kubelet[2680]: I0213 15:40:15.219616 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xrhr5" 
podStartSLOduration=12.982322953 podStartE2EDuration="21.219584963s" podCreationTimestamp="2025-02-13 15:39:54 +0000 UTC" firstStartedPulling="2025-02-13 15:39:55.335802263 +0000 UTC m=+8.146683348" lastFinishedPulling="2025-02-13 15:40:03.573064278 +0000 UTC m=+16.383945358" observedRunningTime="2025-02-13 15:40:10.587667726 +0000 UTC m=+23.398548830" watchObservedRunningTime="2025-02-13 15:40:15.219584963 +0000 UTC m=+28.030466051" Feb 13 15:40:15.577088 systemd-networkd[1397]: lxc_health: Gained IPv6LL Feb 13 15:40:15.704652 systemd-networkd[1397]: lxcce4bd857fa65: Gained IPv6LL Feb 13 15:40:15.960654 systemd-networkd[1397]: lxc14752de7ddb3: Gained IPv6LL Feb 13 15:40:16.454825 systemd[1]: Started sshd@8-10.128.0.26:22-218.92.0.190:27387.service - OpenSSH per-connection server daemon (218.92.0.190:27387). Feb 13 15:40:17.835603 sshd[3900]: PAM: Permission denied for root from 218.92.0.190 Feb 13 15:40:18.062211 sshd[3900]: PAM: Permission denied for root from 218.92.0.190 Feb 13 15:40:18.658535 systemd[1]: Started sshd@9-10.128.0.26:22-218.92.0.204:51144.service - OpenSSH per-connection server daemon (218.92.0.204:51144). Feb 13 15:40:18.673906 ntpd[1469]: Listen normally on 8 cilium_host 192.168.0.82:123 Feb 13 15:40:18.674041 ntpd[1469]: Listen normally on 9 cilium_net [fe80::28ef:d4ff:fed7:5c07%4]:123 Feb 13 15:40:18.674522 ntpd[1469]: 13 Feb 15:40:18 ntpd[1469]: Listen normally on 8 cilium_host 192.168.0.82:123 Feb 13 15:40:18.674522 ntpd[1469]: 13 Feb 15:40:18 ntpd[1469]: Listen normally on 9 cilium_net [fe80::28ef:d4ff:fed7:5c07%4]:123 Feb 13 15:40:18.674522 ntpd[1469]: 13 Feb 15:40:18 ntpd[1469]: Listen normally on 10 cilium_host [fe80::cbc:97ff:fe4d:40a8%5]:123 Feb 13 15:40:18.674522 ntpd[1469]: 13 Feb 15:40:18 ntpd[1469]: Listen normally on 11 cilium_vxlan [fe80::803f:cff:fed3:b087%6]:123 Feb 13 15:40:18.674522 ntpd[1469]: 13 Feb 15:40:18 ntpd[1469]: Listen normally on 12 lxc_health [fe80::bc37:34ff:fe19:47c8%8]:123 Feb 13 15:40:18.674522 ntpd[1469]: 13 Feb 15:40:18 ntpd[1469]: Listen normally on 13 lxc14752de7ddb3 [fe80::a067:94ff:fe7c:235b%10]:123 Feb 13 15:40:18.674522 ntpd[1469]: 13 Feb 15:40:18 ntpd[1469]: Listen normally on 14 lxcce4bd857fa65 [fe80::744c:1cff:fe15:dcc9%12]:123 Feb 13 15:40:18.674132 ntpd[1469]: Listen normally on 10 cilium_host [fe80::cbc:97ff:fe4d:40a8%5]:123 Feb 13 15:40:18.674196 ntpd[1469]: Listen normally on 11 cilium_vxlan [fe80::803f:cff:fed3:b087%6]:123 Feb 13 15:40:18.674258 ntpd[1469]: Listen normally on 12 lxc_health [fe80::bc37:34ff:fe19:47c8%8]:123 Feb 13 15:40:18.674320 ntpd[1469]: Listen normally on 13 lxc14752de7ddb3 [fe80::a067:94ff:fe7c:235b%10]:123 Feb 13 15:40:18.674395 ntpd[1469]: Listen normally on 14 lxcce4bd857fa65 [fe80::744c:1cff:fe15:dcc9%12]:123 Feb 13 15:40:18.773090 sshd[3900]: PAM: Permission denied for root from 218.92.0.190 Feb 13 15:40:18.923583 sshd[3908]: Unable to negotiate with 218.92.0.204 port 51144: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Feb 13 15:40:18.926969 systemd[1]: sshd@9-10.128.0.26:22-218.92.0.204:51144.service: Deactivated successfully. Feb 13 15:40:18.999448 sshd[3900]: Received disconnect from 218.92.0.190 port 27387:11: [preauth] Feb 13 15:40:18.999448 sshd[3900]: Disconnected from authenticating user root 218.92.0.190 port 27387 [preauth] Feb 13 15:40:19.002303 systemd[1]: sshd@8-10.128.0.26:22-218.92.0.190:27387.service: Deactivated successfully. 
Feb 13 15:40:19.622278 containerd[1504]: time="2025-02-13T15:40:19.621841095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:19.622278 containerd[1504]: time="2025-02-13T15:40:19.621934865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:19.622278 containerd[1504]: time="2025-02-13T15:40:19.621962875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:19.622278 containerd[1504]: time="2025-02-13T15:40:19.622097832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:19.657665 containerd[1504]: time="2025-02-13T15:40:19.657216765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:19.657665 containerd[1504]: time="2025-02-13T15:40:19.657521065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:19.657665 containerd[1504]: time="2025-02-13T15:40:19.657568806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:19.658145 containerd[1504]: time="2025-02-13T15:40:19.657718123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:19.717666 systemd[1]: Started cri-containerd-56820dd50f96ff4edcbd59dbeeb7aa60bfd87bd6504ab119698850f2c1cec9ec.scope - libcontainer container 56820dd50f96ff4edcbd59dbeeb7aa60bfd87bd6504ab119698850f2c1cec9ec. Feb 13 15:40:19.727696 systemd[1]: Started cri-containerd-6eb554c270b7edee21df6be23d76ed99cbd8e5aeaf58bbd69413c0304bb8f87f.scope - libcontainer container 6eb554c270b7edee21df6be23d76ed99cbd8e5aeaf58bbd69413c0304bb8f87f. 
Feb 13 15:40:19.852647 containerd[1504]: time="2025-02-13T15:40:19.852564579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wrxm9,Uid:e253d7d5-901b-47ec-9c74-0cd5ed7324d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6eb554c270b7edee21df6be23d76ed99cbd8e5aeaf58bbd69413c0304bb8f87f\"" Feb 13 15:40:19.866844 containerd[1504]: time="2025-02-13T15:40:19.866741956Z" level=info msg="CreateContainer within sandbox \"6eb554c270b7edee21df6be23d76ed99cbd8e5aeaf58bbd69413c0304bb8f87f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:40:19.893753 containerd[1504]: time="2025-02-13T15:40:19.893555737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cvnq9,Uid:77015497-8753-422e-a3bf-8bd792e5c1c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"56820dd50f96ff4edcbd59dbeeb7aa60bfd87bd6504ab119698850f2c1cec9ec\"" Feb 13 15:40:19.908673 containerd[1504]: time="2025-02-13T15:40:19.907683849Z" level=info msg="CreateContainer within sandbox \"56820dd50f96ff4edcbd59dbeeb7aa60bfd87bd6504ab119698850f2c1cec9ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:40:19.919068 containerd[1504]: time="2025-02-13T15:40:19.919013225Z" level=info msg="CreateContainer within sandbox \"6eb554c270b7edee21df6be23d76ed99cbd8e5aeaf58bbd69413c0304bb8f87f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6f585c2d032c4ddec2d1bb22b1f06ac01289b552c117ca9cc1acbe14125a2872\"" Feb 13 15:40:19.921411 containerd[1504]: time="2025-02-13T15:40:19.920823501Z" level=info msg="StartContainer for \"6f585c2d032c4ddec2d1bb22b1f06ac01289b552c117ca9cc1acbe14125a2872\"" Feb 13 15:40:19.972858 containerd[1504]: time="2025-02-13T15:40:19.972746495Z" level=info msg="CreateContainer within sandbox \"56820dd50f96ff4edcbd59dbeeb7aa60bfd87bd6504ab119698850f2c1cec9ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7f817fb3d61fab27d1de32eb8fba580bd9db4b576c6971d44d329905420d624f\"" Feb 13 15:40:19.974799 containerd[1504]: time="2025-02-13T15:40:19.974337407Z" level=info msg="StartContainer for \"7f817fb3d61fab27d1de32eb8fba580bd9db4b576c6971d44d329905420d624f\"" Feb 13 15:40:20.020050 systemd[1]: Started cri-containerd-6f585c2d032c4ddec2d1bb22b1f06ac01289b552c117ca9cc1acbe14125a2872.scope - libcontainer container 6f585c2d032c4ddec2d1bb22b1f06ac01289b552c117ca9cc1acbe14125a2872. Feb 13 15:40:20.042739 systemd[1]: Started cri-containerd-7f817fb3d61fab27d1de32eb8fba580bd9db4b576c6971d44d329905420d624f.scope - libcontainer container 7f817fb3d61fab27d1de32eb8fba580bd9db4b576c6971d44d329905420d624f. 
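
The containerd entries above show the runc.v2 shim plugins being loaded, systemd creating one cri-containerd-<id>.scope unit per sandbox, RunPodSandbox returning the two sandbox IDs (prefixes 56820dd5 and 6eb554c2), and CreateContainer placing a coredns container in each. The sketch below shows one way those containers could be inspected from the node, assuming the default socket path /run/containerd/containerd.sock and the k8s.io namespace that the CRI plugin uses; it relies on the containerd Go client and is an illustration, not anything that appears in this log.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"            // containerd Go client (v1 import path)
	"github.com/containerd/containerd/namespaces" // namespace-scoped contexts
)

func main() {
	// Assumptions: the default containerd socket and the "k8s.io" namespace
	// that the CRI plugin uses for kubelet-managed sandboxes and containers.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		img, err := c.Image(ctx)
		if err != nil {
			fmt.Println(c.ID(), "(no image reference)")
			continue
		}
		fmt.Println(c.ID(), img.Name())
	}
}
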
Feb 13 15:40:20.100397 containerd[1504]: time="2025-02-13T15:40:20.100310846Z" level=info msg="StartContainer for \"6f585c2d032c4ddec2d1bb22b1f06ac01289b552c117ca9cc1acbe14125a2872\" returns successfully" Feb 13 15:40:20.109161 containerd[1504]: time="2025-02-13T15:40:20.108999969Z" level=info msg="StartContainer for \"7f817fb3d61fab27d1de32eb8fba580bd9db4b576c6971d44d329905420d624f\" returns successfully" Feb 13 15:40:20.638403 kubelet[2680]: I0213 15:40:20.633875 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cvnq9" podStartSLOduration=25.633840335 podStartE2EDuration="25.633840335s" podCreationTimestamp="2025-02-13 15:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:40:20.631854618 +0000 UTC m=+33.442735713" watchObservedRunningTime="2025-02-13 15:40:20.633840335 +0000 UTC m=+33.444721428" Feb 13 15:40:20.664706 kubelet[2680]: I0213 15:40:20.663494 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wrxm9" podStartSLOduration=25.663464609000002 podStartE2EDuration="25.663464609s" podCreationTimestamp="2025-02-13 15:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:40:20.661804802 +0000 UTC m=+33.472685894" watchObservedRunningTime="2025-02-13 15:40:20.663464609 +0000 UTC m=+33.474345699" Feb 13 15:40:35.727159 systemd[1]: Started sshd@10-10.128.0.26:22-139.178.68.195:44480.service - OpenSSH per-connection server daemon (139.178.68.195:44480). Feb 13 15:40:36.029768 sshd[4092]: Accepted publickey for core from 139.178.68.195 port 44480 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:36.032080 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:36.039608 systemd-logind[1480]: New session 8 of user core. Feb 13 15:40:36.046782 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:40:36.353688 sshd[4094]: Connection closed by 139.178.68.195 port 44480 Feb 13 15:40:36.354823 sshd-session[4092]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:36.361045 systemd[1]: sshd@10-10.128.0.26:22-139.178.68.195:44480.service: Deactivated successfully. Feb 13 15:40:36.364925 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:40:36.367453 systemd-logind[1480]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:40:36.369111 systemd-logind[1480]: Removed session 8. Feb 13 15:40:41.412825 systemd[1]: Started sshd@11-10.128.0.26:22-139.178.68.195:33314.service - OpenSSH per-connection server daemon (139.178.68.195:33314). Feb 13 15:40:41.717549 sshd[4107]: Accepted publickey for core from 139.178.68.195 port 33314 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:41.719326 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:41.726858 systemd-logind[1480]: New session 9 of user core. Feb 13 15:40:41.731619 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:40:42.027919 sshd[4109]: Connection closed by 139.178.68.195 port 33314 Feb 13 15:40:42.029531 sshd-session[4107]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:42.036650 systemd[1]: sshd@11-10.128.0.26:22-139.178.68.195:33314.service: Deactivated successfully. 
Feb 13 15:40:42.039924 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:40:42.041464 systemd-logind[1480]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:40:42.043325 systemd-logind[1480]: Removed session 9. Feb 13 15:40:47.092880 systemd[1]: Started sshd@12-10.128.0.26:22-139.178.68.195:37578.service - OpenSSH per-connection server daemon (139.178.68.195:37578). Feb 13 15:40:47.397158 sshd[4122]: Accepted publickey for core from 139.178.68.195 port 37578 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:47.399243 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:47.405921 systemd-logind[1480]: New session 10 of user core. Feb 13 15:40:47.411668 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:40:47.704266 sshd[4124]: Connection closed by 139.178.68.195 port 37578 Feb 13 15:40:47.705679 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:47.710970 systemd[1]: sshd@12-10.128.0.26:22-139.178.68.195:37578.service: Deactivated successfully. Feb 13 15:40:47.714563 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:40:47.717192 systemd-logind[1480]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:40:47.719495 systemd-logind[1480]: Removed session 10. Feb 13 15:40:52.764804 systemd[1]: Started sshd@13-10.128.0.26:22-139.178.68.195:37582.service - OpenSSH per-connection server daemon (139.178.68.195:37582). Feb 13 15:40:53.056204 sshd[4139]: Accepted publickey for core from 139.178.68.195 port 37582 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:53.058667 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:53.065763 systemd-logind[1480]: New session 11 of user core. Feb 13 15:40:53.074689 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:40:53.363660 sshd[4141]: Connection closed by 139.178.68.195 port 37582 Feb 13 15:40:53.364787 sshd-session[4139]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:53.370221 systemd[1]: sshd@13-10.128.0.26:22-139.178.68.195:37582.service: Deactivated successfully. Feb 13 15:40:53.373350 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:40:53.376144 systemd-logind[1480]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:40:53.378165 systemd-logind[1480]: Removed session 11. Feb 13 15:40:58.421957 systemd[1]: Started sshd@14-10.128.0.26:22-139.178.68.195:60322.service - OpenSSH per-connection server daemon (139.178.68.195:60322). Feb 13 15:40:58.722663 sshd[4156]: Accepted publickey for core from 139.178.68.195 port 60322 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:58.724526 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:58.731817 systemd-logind[1480]: New session 12 of user core. Feb 13 15:40:58.738625 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:40:59.020237 sshd[4158]: Connection closed by 139.178.68.195 port 60322 Feb 13 15:40:59.021500 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:59.026965 systemd[1]: sshd@14-10.128.0.26:22-139.178.68.195:60322.service: Deactivated successfully. Feb 13 15:40:59.029877 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:40:59.031193 systemd-logind[1480]: Session 12 logged out. 
Waiting for processes to exit. Feb 13 15:40:59.032747 systemd-logind[1480]: Removed session 12. Feb 13 15:41:04.081413 systemd[1]: Started sshd@15-10.128.0.26:22-139.178.68.195:60338.service - OpenSSH per-connection server daemon (139.178.68.195:60338). Feb 13 15:41:04.378685 sshd[4171]: Accepted publickey for core from 139.178.68.195 port 60338 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:04.380653 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:04.388741 systemd-logind[1480]: New session 13 of user core. Feb 13 15:41:04.403736 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:41:04.670690 sshd[4173]: Connection closed by 139.178.68.195 port 60338 Feb 13 15:41:04.672052 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:04.677342 systemd[1]: sshd@15-10.128.0.26:22-139.178.68.195:60338.service: Deactivated successfully. Feb 13 15:41:04.680229 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:41:04.681651 systemd-logind[1480]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:41:04.683445 systemd-logind[1480]: Removed session 13. Feb 13 15:41:06.447817 systemd[1]: Started sshd@16-10.128.0.26:22-218.92.0.190:29538.service - OpenSSH per-connection server daemon (218.92.0.190:29538). Feb 13 15:41:09.330498 sshd[4186]: PAM: Permission denied for root from 218.92.0.190 Feb 13 15:41:09.552084 sshd[4186]: PAM: Permission denied for root from 218.92.0.190 Feb 13 15:41:09.729827 systemd[1]: Started sshd@17-10.128.0.26:22-139.178.68.195:46872.service - OpenSSH per-connection server daemon (139.178.68.195:46872). Feb 13 15:41:10.025965 sshd[4191]: Accepted publickey for core from 139.178.68.195 port 46872 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:10.027911 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:10.034166 systemd-logind[1480]: New session 14 of user core. Feb 13 15:41:10.043635 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:41:10.313000 sshd[4193]: Connection closed by 139.178.68.195 port 46872 Feb 13 15:41:10.314141 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:10.318727 systemd[1]: sshd@17-10.128.0.26:22-139.178.68.195:46872.service: Deactivated successfully. Feb 13 15:41:10.321543 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:41:10.323866 systemd-logind[1480]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:41:10.325787 systemd-logind[1480]: Removed session 14. Feb 13 15:41:10.371838 systemd[1]: Started sshd@18-10.128.0.26:22-139.178.68.195:46874.service - OpenSSH per-connection server daemon (139.178.68.195:46874). Feb 13 15:41:10.666781 sshd[4205]: Accepted publickey for core from 139.178.68.195 port 46874 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:10.669102 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:10.676517 systemd-logind[1480]: New session 15 of user core. Feb 13 15:41:10.685644 systemd[1]: Started session-15.scope - Session 15 of User core. 
Feb 13 15:41:10.751359 sshd[4186]: PAM: Permission denied for root from 218.92.0.190 Feb 13 15:41:10.971683 sshd[4186]: Received disconnect from 218.92.0.190 port 29538:11: [preauth] Feb 13 15:41:10.971683 sshd[4186]: Disconnected from authenticating user root 218.92.0.190 port 29538 [preauth] Feb 13 15:41:10.974352 systemd[1]: sshd@16-10.128.0.26:22-218.92.0.190:29538.service: Deactivated successfully. Feb 13 15:41:11.029780 sshd[4207]: Connection closed by 139.178.68.195 port 46874 Feb 13 15:41:11.031434 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:11.038245 systemd[1]: sshd@18-10.128.0.26:22-139.178.68.195:46874.service: Deactivated successfully. Feb 13 15:41:11.042751 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:41:11.045505 systemd-logind[1480]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:41:11.047546 systemd-logind[1480]: Removed session 15. Feb 13 15:41:11.090839 systemd[1]: Started sshd@19-10.128.0.26:22-139.178.68.195:46880.service - OpenSSH per-connection server daemon (139.178.68.195:46880). Feb 13 15:41:11.382287 sshd[4220]: Accepted publickey for core from 139.178.68.195 port 46880 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:11.384739 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:11.391812 systemd-logind[1480]: New session 16 of user core. Feb 13 15:41:11.401685 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:41:11.674171 sshd[4222]: Connection closed by 139.178.68.195 port 46880 Feb 13 15:41:11.675461 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:11.680271 systemd[1]: sshd@19-10.128.0.26:22-139.178.68.195:46880.service: Deactivated successfully. Feb 13 15:41:11.683592 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:41:11.685985 systemd-logind[1480]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:41:11.688136 systemd-logind[1480]: Removed session 16. Feb 13 15:41:16.736931 systemd[1]: Started sshd@20-10.128.0.26:22-139.178.68.195:49200.service - OpenSSH per-connection server daemon (139.178.68.195:49200). Feb 13 15:41:17.045482 sshd[4235]: Accepted publickey for core from 139.178.68.195 port 49200 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:17.048352 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:17.055202 systemd-logind[1480]: New session 17 of user core. Feb 13 15:41:17.061655 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:41:17.353873 sshd[4237]: Connection closed by 139.178.68.195 port 49200 Feb 13 15:41:17.354824 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:17.360674 systemd[1]: sshd@20-10.128.0.26:22-139.178.68.195:49200.service: Deactivated successfully. Feb 13 15:41:17.364951 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:41:17.367397 systemd-logind[1480]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:41:17.369394 systemd-logind[1480]: Removed session 17. Feb 13 15:41:22.414918 systemd[1]: Started sshd@21-10.128.0.26:22-139.178.68.195:49206.service - OpenSSH per-connection server daemon (139.178.68.195:49206). 
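
Interleaved with the interactive sessions above are repeated sshd records of "PAM: Permission denied for root" from 218.92.0.190 ending in a preauth disconnect, and a rejected connection from 218.92.0.204 that only offered legacy diffie-hellman key exchanges. A small Go sketch for tallying such failed root attempts per source address is below; the regular expression matches the sshd wording seen in these entries, and the sample lines are copied from the journal above, so it is only a reading aid, not anything the host itself runs.

package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// Counts sshd "PAM: Permission denied for root from <addr>" records per
// source address. The input is a few lines copied from the journal above;
// in practice the scanner would read the full journal stream.
func main() {
	journal := `Feb 13 15:41:09.330498 sshd[4186]: PAM: Permission denied for root from 218.92.0.190
Feb 13 15:41:09.552084 sshd[4186]: PAM: Permission denied for root from 218.92.0.190
Feb 13 15:41:10.751359 sshd[4186]: PAM: Permission denied for root from 218.92.0.190`

	re := regexp.MustCompile(`PAM: Permission denied for root from (\S+)`)
	counts := map[string]int{}

	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for addr, n := range counts {
		fmt.Printf("%s: %d failed root attempts\n", addr, n)
	}
}
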
Feb 13 15:41:22.714339 sshd[4250]: Accepted publickey for core from 139.178.68.195 port 49206 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:22.716474 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:22.723621 systemd-logind[1480]: New session 18 of user core. Feb 13 15:41:22.733663 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:41:23.018199 sshd[4252]: Connection closed by 139.178.68.195 port 49206 Feb 13 15:41:23.019762 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:23.026538 systemd[1]: sshd@21-10.128.0.26:22-139.178.68.195:49206.service: Deactivated successfully. Feb 13 15:41:23.029716 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:41:23.031489 systemd-logind[1480]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:41:23.033601 systemd-logind[1480]: Removed session 18. Feb 13 15:41:23.082509 systemd[1]: Started sshd@22-10.128.0.26:22-139.178.68.195:49212.service - OpenSSH per-connection server daemon (139.178.68.195:49212). Feb 13 15:41:23.381991 sshd[4264]: Accepted publickey for core from 139.178.68.195 port 49212 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:23.384123 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:23.390981 systemd-logind[1480]: New session 19 of user core. Feb 13 15:41:23.397642 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:41:23.755422 sshd[4266]: Connection closed by 139.178.68.195 port 49212 Feb 13 15:41:23.756767 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:23.763896 systemd[1]: sshd@22-10.128.0.26:22-139.178.68.195:49212.service: Deactivated successfully. Feb 13 15:41:23.766898 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:41:23.768310 systemd-logind[1480]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:41:23.770646 systemd-logind[1480]: Removed session 19. Feb 13 15:41:23.819031 systemd[1]: Started sshd@23-10.128.0.26:22-139.178.68.195:49214.service - OpenSSH per-connection server daemon (139.178.68.195:49214). Feb 13 15:41:24.119221 sshd[4276]: Accepted publickey for core from 139.178.68.195 port 49214 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:24.121665 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:24.128746 systemd-logind[1480]: New session 20 of user core. Feb 13 15:41:24.134820 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:41:25.129641 sshd[4278]: Connection closed by 139.178.68.195 port 49214 Feb 13 15:41:25.130989 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:25.137280 systemd[1]: sshd@23-10.128.0.26:22-139.178.68.195:49214.service: Deactivated successfully. Feb 13 15:41:25.142222 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:41:25.145831 systemd-logind[1480]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:41:25.147846 systemd-logind[1480]: Removed session 20. Feb 13 15:41:25.188013 systemd[1]: Started sshd@24-10.128.0.26:22-139.178.68.195:49220.service - OpenSSH per-connection server daemon (139.178.68.195:49220). 
Feb 13 15:41:25.493675 sshd[4295]: Accepted publickey for core from 139.178.68.195 port 49220 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:25.495986 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:25.503066 systemd-logind[1480]: New session 21 of user core. Feb 13 15:41:25.510640 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:41:25.960184 sshd[4297]: Connection closed by 139.178.68.195 port 49220 Feb 13 15:41:25.961312 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:25.966760 systemd[1]: sshd@24-10.128.0.26:22-139.178.68.195:49220.service: Deactivated successfully. Feb 13 15:41:25.970616 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:41:25.973860 systemd-logind[1480]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:41:25.975631 systemd-logind[1480]: Removed session 21. Feb 13 15:41:26.024486 systemd[1]: Started sshd@25-10.128.0.26:22-139.178.68.195:49232.service - OpenSSH per-connection server daemon (139.178.68.195:49232). Feb 13 15:41:26.316643 sshd[4309]: Accepted publickey for core from 139.178.68.195 port 49232 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:26.318974 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:26.325660 systemd-logind[1480]: New session 22 of user core. Feb 13 15:41:26.336667 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:41:26.614493 sshd[4311]: Connection closed by 139.178.68.195 port 49232 Feb 13 15:41:26.615632 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:26.621001 systemd[1]: sshd@25-10.128.0.26:22-139.178.68.195:49232.service: Deactivated successfully. Feb 13 15:41:26.624320 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:41:26.627096 systemd-logind[1480]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:41:26.628960 systemd-logind[1480]: Removed session 22. Feb 13 15:41:31.669756 systemd[1]: Started sshd@26-10.128.0.26:22-139.178.68.195:40500.service - OpenSSH per-connection server daemon (139.178.68.195:40500). Feb 13 15:41:31.960788 sshd[4325]: Accepted publickey for core from 139.178.68.195 port 40500 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:31.962766 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:31.970125 systemd-logind[1480]: New session 23 of user core. Feb 13 15:41:31.980646 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:41:32.256564 sshd[4327]: Connection closed by 139.178.68.195 port 40500 Feb 13 15:41:32.258210 sshd-session[4325]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:32.264582 systemd[1]: sshd@26-10.128.0.26:22-139.178.68.195:40500.service: Deactivated successfully. Feb 13 15:41:32.269721 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:41:32.273389 systemd-logind[1480]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:41:32.275174 systemd-logind[1480]: Removed session 23. Feb 13 15:41:37.315876 systemd[1]: Started sshd@27-10.128.0.26:22-139.178.68.195:58390.service - OpenSSH per-connection server daemon (139.178.68.195:58390). 
Feb 13 15:41:37.613069 sshd[4339]: Accepted publickey for core from 139.178.68.195 port 58390 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:37.614945 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:37.621303 systemd-logind[1480]: New session 24 of user core. Feb 13 15:41:37.625616 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:41:37.902271 sshd[4341]: Connection closed by 139.178.68.195 port 58390 Feb 13 15:41:37.903459 sshd-session[4339]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:37.910696 systemd[1]: sshd@27-10.128.0.26:22-139.178.68.195:58390.service: Deactivated successfully. Feb 13 15:41:37.915113 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:41:37.917930 systemd-logind[1480]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:41:37.919881 systemd-logind[1480]: Removed session 24. Feb 13 15:41:42.959837 systemd[1]: Started sshd@28-10.128.0.26:22-139.178.68.195:58392.service - OpenSSH per-connection server daemon (139.178.68.195:58392). Feb 13 15:41:43.252116 sshd[4354]: Accepted publickey for core from 139.178.68.195 port 58392 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:43.253943 sshd-session[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:43.260067 systemd-logind[1480]: New session 25 of user core. Feb 13 15:41:43.266592 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:41:43.557524 sshd[4356]: Connection closed by 139.178.68.195 port 58392 Feb 13 15:41:43.559077 sshd-session[4354]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:43.566032 systemd[1]: sshd@28-10.128.0.26:22-139.178.68.195:58392.service: Deactivated successfully. Feb 13 15:41:43.569340 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:41:43.570965 systemd-logind[1480]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:41:43.572926 systemd-logind[1480]: Removed session 25. Feb 13 15:41:43.617858 systemd[1]: Started sshd@29-10.128.0.26:22-139.178.68.195:58394.service - OpenSSH per-connection server daemon (139.178.68.195:58394). Feb 13 15:41:43.918652 sshd[4369]: Accepted publickey for core from 139.178.68.195 port 58394 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:43.921125 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:43.932460 systemd-logind[1480]: New session 26 of user core. Feb 13 15:41:43.938677 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:41:46.237686 containerd[1504]: time="2025-02-13T15:41:46.237441979Z" level=info msg="StopContainer for \"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\" with timeout 30 (s)" Feb 13 15:41:46.247588 containerd[1504]: time="2025-02-13T15:41:46.246169666Z" level=info msg="Stop container \"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\" with signal terminated" Feb 13 15:41:46.281662 systemd[1]: cri-containerd-7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc.scope: Deactivated successfully. 
Feb 13 15:41:46.324491 containerd[1504]: time="2025-02-13T15:41:46.323262290Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:41:46.339496 containerd[1504]: time="2025-02-13T15:41:46.339320887Z" level=info msg="StopContainer for \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\" with timeout 2 (s)" Feb 13 15:41:46.340109 containerd[1504]: time="2025-02-13T15:41:46.340037775Z" level=info msg="Stop container \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\" with signal terminated" Feb 13 15:41:46.351855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc-rootfs.mount: Deactivated successfully. Feb 13 15:41:46.358211 systemd-networkd[1397]: lxc_health: Link DOWN Feb 13 15:41:46.358226 systemd-networkd[1397]: lxc_health: Lost carrier Feb 13 15:41:46.382990 containerd[1504]: time="2025-02-13T15:41:46.382000285Z" level=info msg="shim disconnected" id=7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc namespace=k8s.io Feb 13 15:41:46.382990 containerd[1504]: time="2025-02-13T15:41:46.382603003Z" level=warning msg="cleaning up after shim disconnected" id=7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc namespace=k8s.io Feb 13 15:41:46.382990 containerd[1504]: time="2025-02-13T15:41:46.382636766Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:46.396364 systemd[1]: cri-containerd-29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5.scope: Deactivated successfully. Feb 13 15:41:46.397459 systemd[1]: cri-containerd-29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5.scope: Consumed 10.426s CPU time, 124.6M memory peak, 136K read from disk, 13.3M written to disk. Feb 13 15:41:46.420613 containerd[1504]: time="2025-02-13T15:41:46.419604327Z" level=info msg="StopContainer for \"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\" returns successfully" Feb 13 15:41:46.421453 containerd[1504]: time="2025-02-13T15:41:46.421190290Z" level=info msg="StopPodSandbox for \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\"" Feb 13 15:41:46.421453 containerd[1504]: time="2025-02-13T15:41:46.421275020Z" level=info msg="Container to stop \"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:41:46.428185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177-shm.mount: Deactivated successfully. Feb 13 15:41:46.443627 systemd[1]: cri-containerd-9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177.scope: Deactivated successfully. Feb 13 15:41:46.455064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5-rootfs.mount: Deactivated successfully. 
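
The StopContainer entries above ("with timeout 30 (s)" for one container, "with timeout 2 (s)" for the long-running cilium agent, each "with signal terminated") describe the usual CRI stop sequence: deliver SIGTERM, wait out the grace period, then force-kill whatever remains, after which systemd reports the scope deactivated along with its accumulated CPU and memory usage. The Go sketch below shows that terminate-then-kill pattern against an ordinary child process; it is a generic illustration, not containerd's implementation, and the sleep command and two-second grace period are stand-ins.

package main

import (
	"log"
	"os/exec"
	"syscall"
	"time"
)

// Terminate-then-kill, in the spirit of StopContainer's "signal terminated"
// plus timeout: send SIGTERM, allow a grace period, then SIGKILL.
func main() {
	cmd := exec.Command("sleep", "60") // stand-in for the container's main process
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}

	select {
	case err := <-done:
		log.Println("exited after SIGTERM:", err)
	case <-time.After(2 * time.Second): // stand-in for the "timeout 2 (s)" grace period
		log.Println("grace period elapsed, sending SIGKILL")
		_ = cmd.Process.Kill()
		log.Println("exited after SIGKILL:", <-done)
	}
}
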
Feb 13 15:41:46.459143 containerd[1504]: time="2025-02-13T15:41:46.459065963Z" level=info msg="shim disconnected" id=29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5 namespace=k8s.io Feb 13 15:41:46.459784 containerd[1504]: time="2025-02-13T15:41:46.459605910Z" level=warning msg="cleaning up after shim disconnected" id=29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5 namespace=k8s.io Feb 13 15:41:46.459784 containerd[1504]: time="2025-02-13T15:41:46.459635247Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:46.499460 containerd[1504]: time="2025-02-13T15:41:46.498930980Z" level=info msg="StopContainer for \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\" returns successfully" Feb 13 15:41:46.500497 containerd[1504]: time="2025-02-13T15:41:46.500279554Z" level=info msg="StopPodSandbox for \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\"" Feb 13 15:41:46.502115 containerd[1504]: time="2025-02-13T15:41:46.500345155Z" level=info msg="Container to stop \"35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:41:46.502115 containerd[1504]: time="2025-02-13T15:41:46.501146892Z" level=info msg="Container to stop \"278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:41:46.502115 containerd[1504]: time="2025-02-13T15:41:46.501202418Z" level=info msg="Container to stop \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:41:46.502115 containerd[1504]: time="2025-02-13T15:41:46.501248290Z" level=info msg="Container to stop \"bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:41:46.502115 containerd[1504]: time="2025-02-13T15:41:46.501266212Z" level=info msg="Container to stop \"d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:41:46.507571 containerd[1504]: time="2025-02-13T15:41:46.507139920Z" level=info msg="shim disconnected" id=9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177 namespace=k8s.io Feb 13 15:41:46.507571 containerd[1504]: time="2025-02-13T15:41:46.507246960Z" level=warning msg="cleaning up after shim disconnected" id=9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177 namespace=k8s.io Feb 13 15:41:46.507571 containerd[1504]: time="2025-02-13T15:41:46.507264527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:46.516439 systemd[1]: cri-containerd-2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae.scope: Deactivated successfully. 
Feb 13 15:41:46.548058 containerd[1504]: time="2025-02-13T15:41:46.547358092Z" level=info msg="TearDown network for sandbox \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\" successfully" Feb 13 15:41:46.548058 containerd[1504]: time="2025-02-13T15:41:46.547464466Z" level=info msg="StopPodSandbox for \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\" returns successfully" Feb 13 15:41:46.564148 containerd[1504]: time="2025-02-13T15:41:46.563066562Z" level=info msg="shim disconnected" id=2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae namespace=k8s.io Feb 13 15:41:46.564148 containerd[1504]: time="2025-02-13T15:41:46.563148255Z" level=warning msg="cleaning up after shim disconnected" id=2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae namespace=k8s.io Feb 13 15:41:46.564148 containerd[1504]: time="2025-02-13T15:41:46.563167085Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:46.600676 containerd[1504]: time="2025-02-13T15:41:46.600303312Z" level=info msg="TearDown network for sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" successfully" Feb 13 15:41:46.600676 containerd[1504]: time="2025-02-13T15:41:46.600480184Z" level=info msg="StopPodSandbox for \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" returns successfully" Feb 13 15:41:46.672657 kubelet[2680]: I0213 15:41:46.672574 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4716e82-7dfc-4609-ba52-24c5467a7bdb-cilium-config-path\") pod \"d4716e82-7dfc-4609-ba52-24c5467a7bdb\" (UID: \"d4716e82-7dfc-4609-ba52-24c5467a7bdb\") " Feb 13 15:41:46.674311 kubelet[2680]: I0213 15:41:46.673492 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krgrm\" (UniqueName: \"kubernetes.io/projected/d4716e82-7dfc-4609-ba52-24c5467a7bdb-kube-api-access-krgrm\") pod \"d4716e82-7dfc-4609-ba52-24c5467a7bdb\" (UID: \"d4716e82-7dfc-4609-ba52-24c5467a7bdb\") " Feb 13 15:41:46.677435 kubelet[2680]: I0213 15:41:46.677315 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4716e82-7dfc-4609-ba52-24c5467a7bdb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d4716e82-7dfc-4609-ba52-24c5467a7bdb" (UID: "d4716e82-7dfc-4609-ba52-24c5467a7bdb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 15:41:46.679270 kubelet[2680]: I0213 15:41:46.679217 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4716e82-7dfc-4609-ba52-24c5467a7bdb-kube-api-access-krgrm" (OuterVolumeSpecName: "kube-api-access-krgrm") pod "d4716e82-7dfc-4609-ba52-24c5467a7bdb" (UID: "d4716e82-7dfc-4609-ba52-24c5467a7bdb"). InnerVolumeSpecName "kube-api-access-krgrm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 15:41:46.777605 kubelet[2680]: I0213 15:41:46.774879 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cilium-config-path\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.777605 kubelet[2680]: I0213 15:41:46.774977 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-hostproc\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.777605 kubelet[2680]: I0213 15:41:46.775013 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cilium-run\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.777605 kubelet[2680]: I0213 15:41:46.775038 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-xtables-lock\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.777605 kubelet[2680]: I0213 15:41:46.775069 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6brcz\" (UniqueName: \"kubernetes.io/projected/b610ad1e-8f4c-449f-beb7-c5b587e58f09-kube-api-access-6brcz\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.777605 kubelet[2680]: I0213 15:41:46.775098 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-etc-cni-netd\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.778101 kubelet[2680]: I0213 15:41:46.775128 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cilium-cgroup\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.778101 kubelet[2680]: I0213 15:41:46.775157 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b610ad1e-8f4c-449f-beb7-c5b587e58f09-hubble-tls\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.778101 kubelet[2680]: I0213 15:41:46.775186 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-host-proc-sys-net\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.778101 kubelet[2680]: I0213 15:41:46.775210 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-host-proc-sys-kernel\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: 
\"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.778101 kubelet[2680]: I0213 15:41:46.775242 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-bpf-maps\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.778101 kubelet[2680]: I0213 15:41:46.775269 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cni-path\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.778459 kubelet[2680]: I0213 15:41:46.775292 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-lib-modules\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.778459 kubelet[2680]: I0213 15:41:46.775327 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b610ad1e-8f4c-449f-beb7-c5b587e58f09-clustermesh-secrets\") pod \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\" (UID: \"b610ad1e-8f4c-449f-beb7-c5b587e58f09\") " Feb 13 15:41:46.778459 kubelet[2680]: I0213 15:41:46.775431 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4716e82-7dfc-4609-ba52-24c5467a7bdb-cilium-config-path\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:46.778459 kubelet[2680]: I0213 15:41:46.775453 2680 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-krgrm\" (UniqueName: \"kubernetes.io/projected/d4716e82-7dfc-4609-ba52-24c5467a7bdb-kube-api-access-krgrm\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:46.780676 kubelet[2680]: I0213 15:41:46.780615 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b610ad1e-8f4c-449f-beb7-c5b587e58f09-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 15:41:46.780860 kubelet[2680]: I0213 15:41:46.780739 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-hostproc" (OuterVolumeSpecName: "hostproc") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:41:46.780860 kubelet[2680]: I0213 15:41:46.780784 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:41:46.780860 kubelet[2680]: I0213 15:41:46.780814 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:41:46.781129 kubelet[2680]: I0213 15:41:46.781087 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:41:46.781304 kubelet[2680]: I0213 15:41:46.781264 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:41:46.781540 kubelet[2680]: I0213 15:41:46.781516 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:41:46.782658 kubelet[2680]: I0213 15:41:46.782616 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:41:46.782773 kubelet[2680]: I0213 15:41:46.782673 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:41:46.783986 kubelet[2680]: I0213 15:41:46.783938 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 15:41:46.784116 kubelet[2680]: I0213 15:41:46.784043 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cni-path" (OuterVolumeSpecName: "cni-path") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:41:46.784116 kubelet[2680]: I0213 15:41:46.784085 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:41:46.787559 kubelet[2680]: I0213 15:41:46.787521 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b610ad1e-8f4c-449f-beb7-c5b587e58f09-kube-api-access-6brcz" (OuterVolumeSpecName: "kube-api-access-6brcz") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "kube-api-access-6brcz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 15:41:46.788114 kubelet[2680]: I0213 15:41:46.788079 2680 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b610ad1e-8f4c-449f-beb7-c5b587e58f09-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b610ad1e-8f4c-449f-beb7-c5b587e58f09" (UID: "b610ad1e-8f4c-449f-beb7-c5b587e58f09"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 15:41:46.835552 kubelet[2680]: I0213 15:41:46.835505 2680 scope.go:117] "RemoveContainer" containerID="29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5" Feb 13 15:41:46.838756 containerd[1504]: time="2025-02-13T15:41:46.838552607Z" level=info msg="RemoveContainer for \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\"" Feb 13 15:41:46.848311 containerd[1504]: time="2025-02-13T15:41:46.848068698Z" level=info msg="RemoveContainer for \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\" returns successfully" Feb 13 15:41:46.848547 kubelet[2680]: I0213 15:41:46.848497 2680 scope.go:117] "RemoveContainer" containerID="d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5" Feb 13 15:41:46.856304 containerd[1504]: time="2025-02-13T15:41:46.855403214Z" level=info msg="RemoveContainer for \"d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5\"" Feb 13 15:41:46.857344 systemd[1]: Removed slice kubepods-burstable-podb610ad1e_8f4c_449f_beb7_c5b587e58f09.slice - libcontainer container kubepods-burstable-podb610ad1e_8f4c_449f_beb7_c5b587e58f09.slice. Feb 13 15:41:46.857576 systemd[1]: kubepods-burstable-podb610ad1e_8f4c_449f_beb7_c5b587e58f09.slice: Consumed 10.565s CPU time, 125M memory peak, 136K read from disk, 13.3M written to disk. Feb 13 15:41:46.860365 systemd[1]: Removed slice kubepods-besteffort-podd4716e82_7dfc_4609_ba52_24c5467a7bdb.slice - libcontainer container kubepods-besteffort-podd4716e82_7dfc_4609_ba52_24c5467a7bdb.slice. 
Feb 13 15:41:46.862149 containerd[1504]: time="2025-02-13T15:41:46.861990710Z" level=info msg="RemoveContainer for \"d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5\" returns successfully"
Feb 13 15:41:46.862711 kubelet[2680]: I0213 15:41:46.862663 2680 scope.go:117] "RemoveContainer" containerID="278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036"
Feb 13 15:41:46.866028 containerd[1504]: time="2025-02-13T15:41:46.865514629Z" level=info msg="RemoveContainer for \"278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036\""
Feb 13 15:41:46.870923 containerd[1504]: time="2025-02-13T15:41:46.870868582Z" level=info msg="RemoveContainer for \"278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036\" returns successfully"
Feb 13 15:41:46.872602 kubelet[2680]: I0213 15:41:46.872558 2680 scope.go:117] "RemoveContainer" containerID="bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f"
Feb 13 15:41:46.875644 containerd[1504]: time="2025-02-13T15:41:46.875125108Z" level=info msg="RemoveContainer for \"bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f\""
Feb 13 15:41:46.876511 kubelet[2680]: I0213 15:41:46.876445 2680 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-hostproc\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.876511 kubelet[2680]: I0213 15:41:46.876504 2680 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-etc-cni-netd\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.876737 kubelet[2680]: I0213 15:41:46.876532 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cilium-run\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.876737 kubelet[2680]: I0213 15:41:46.876548 2680 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-xtables-lock\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.876737 kubelet[2680]: I0213 15:41:46.876572 2680 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6brcz\" (UniqueName: \"kubernetes.io/projected/b610ad1e-8f4c-449f-beb7-c5b587e58f09-kube-api-access-6brcz\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.876737 kubelet[2680]: I0213 15:41:46.876588 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cilium-cgroup\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.876737 kubelet[2680]: I0213 15:41:46.876616 2680 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-host-proc-sys-net\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.876737 kubelet[2680]: I0213 15:41:46.876633 2680 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-host-proc-sys-kernel\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.876737 kubelet[2680]: I0213 15:41:46.876656 2680 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b610ad1e-8f4c-449f-beb7-c5b587e58f09-hubble-tls\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.877238 kubelet[2680]: I0213 15:41:46.876673 2680 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cni-path\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.877238 kubelet[2680]: I0213 15:41:46.876688 2680 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-lib-modules\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.877238 kubelet[2680]: I0213 15:41:46.876706 2680 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b610ad1e-8f4c-449f-beb7-c5b587e58f09-bpf-maps\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.877238 kubelet[2680]: I0213 15:41:46.876734 2680 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b610ad1e-8f4c-449f-beb7-c5b587e58f09-clustermesh-secrets\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.877238 kubelet[2680]: I0213 15:41:46.876753 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b610ad1e-8f4c-449f-beb7-c5b587e58f09-cilium-config-path\") on node \"ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:41:46.881076 containerd[1504]: time="2025-02-13T15:41:46.881002352Z" level=info msg="RemoveContainer for \"bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f\" returns successfully"
Feb 13 15:41:46.885042 kubelet[2680]: I0213 15:41:46.882453 2680 scope.go:117] "RemoveContainer" containerID="35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60"
Feb 13 15:41:46.892942 containerd[1504]: time="2025-02-13T15:41:46.892233987Z" level=info msg="RemoveContainer for \"35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60\""
Feb 13 15:41:46.903719 containerd[1504]: time="2025-02-13T15:41:46.902744342Z" level=info msg="RemoveContainer for \"35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60\" returns successfully"
Feb 13 15:41:46.905421 kubelet[2680]: I0213 15:41:46.904264 2680 scope.go:117] "RemoveContainer" containerID="29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5"
Feb 13 15:41:46.905421 kubelet[2680]: E0213 15:41:46.905144 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\": not found" containerID="29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5"
Feb 13 15:41:46.905421 kubelet[2680]: I0213 15:41:46.905208 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5"} err="failed to get container status \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\": not found"
Feb 13 15:41:46.905421 kubelet[2680]: I0213 15:41:46.905341 2680 scope.go:117] "RemoveContainer" containerID="d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5"
Feb 13 15:41:46.905775 containerd[1504]: time="2025-02-13T15:41:46.904854799Z" level=error msg="ContainerStatus for \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29c9c2fa9d33a3d2eac2076301925e7cd3f1519b60806183e32ddc03c34280f5\": not found"
Feb 13 15:41:46.907012 containerd[1504]: time="2025-02-13T15:41:46.906718694Z" level=error msg="ContainerStatus for \"d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5\": not found"
Feb 13 15:41:46.907152 kubelet[2680]: E0213 15:41:46.906893 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5\": not found" containerID="d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5"
Feb 13 15:41:46.907152 kubelet[2680]: I0213 15:41:46.906940 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5"} err="failed to get container status \"d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d30beb3f03f97407935ce9557633df550dfd38b4dd1a2b6719e3241be31d58a5\": not found"
Feb 13 15:41:46.907152 kubelet[2680]: I0213 15:41:46.906972 2680 scope.go:117] "RemoveContainer" containerID="278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036"
Feb 13 15:41:46.908983 containerd[1504]: time="2025-02-13T15:41:46.908542168Z" level=error msg="ContainerStatus for \"278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036\": not found"
Feb 13 15:41:46.909121 kubelet[2680]: E0213 15:41:46.908838 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036\": not found" containerID="278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036"
Feb 13 15:41:46.909121 kubelet[2680]: I0213 15:41:46.908880 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036"} err="failed to get container status \"278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036\": rpc error: code = NotFound desc = an error occurred when try to find container \"278068daec56d1fcf982778ef717204ffe2b493baca75f07e6baab5133fac036\": not found"
Feb 13 15:41:46.909121 kubelet[2680]: I0213 15:41:46.908913 2680 scope.go:117] "RemoveContainer" containerID="bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f"
Feb 13 15:41:46.911496 containerd[1504]: time="2025-02-13T15:41:46.911445196Z" level=error msg="ContainerStatus for \"bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f\": not found"
Feb 13 15:41:46.911687 kubelet[2680]: E0213 15:41:46.911664 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f\": not found" containerID="bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f"
Feb 13 15:41:46.911791 kubelet[2680]: I0213 15:41:46.911705 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f"} err="failed to get container status \"bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb6bf89a43d83cd780ef3a0b83c923386f58d5874c70d46e079cbe613ec2d42f\": not found"
Feb 13 15:41:46.911791 kubelet[2680]: I0213 15:41:46.911738 2680 scope.go:117] "RemoveContainer" containerID="35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60"
Feb 13 15:41:46.913446 containerd[1504]: time="2025-02-13T15:41:46.912539277Z" level=error msg="ContainerStatus for \"35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60\": not found"
Feb 13 15:41:46.913639 kubelet[2680]: E0213 15:41:46.913253 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60\": not found" containerID="35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60"
Feb 13 15:41:46.913639 kubelet[2680]: I0213 15:41:46.913291 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60"} err="failed to get container status \"35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60\": rpc error: code = NotFound desc = an error occurred when try to find container \"35631b8e0ea9bae1421411229b5fe76d778a7c59e12314b9147441c8decbcd60\": not found"
Feb 13 15:41:46.913639 kubelet[2680]: I0213 15:41:46.913319 2680 scope.go:117] "RemoveContainer" containerID="7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc"
Feb 13 15:41:46.919476 containerd[1504]: time="2025-02-13T15:41:46.918289229Z" level=info msg="RemoveContainer for \"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\""
Feb 13 15:41:46.924560 containerd[1504]: time="2025-02-13T15:41:46.924488355Z" level=info msg="RemoveContainer for \"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\" returns successfully"
Feb 13 15:41:46.925278 kubelet[2680]: I0213 15:41:46.925234 2680 scope.go:117] "RemoveContainer" containerID="7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc"
Feb 13 15:41:46.926042 containerd[1504]: time="2025-02-13T15:41:46.925907917Z" level=error msg="ContainerStatus for \"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\": not found"
Feb 13 15:41:46.926284 kubelet[2680]: E0213 15:41:46.926241 2680 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\": not found" containerID="7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc"
Feb 13 15:41:46.926451 kubelet[2680]: I0213 15:41:46.926297 2680 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc"} err="failed to get container status \"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e8b19e0018273987ce69a173c4ec0a1f2bdd27b174c1a9e4ae0ff71ac8243fc\": not found"
Feb 13 15:41:47.289152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177-rootfs.mount: Deactivated successfully.
Feb 13 15:41:47.289343 systemd[1]: var-lib-kubelet-pods-d4716e82\x2d7dfc\x2d4609\x2dba52\x2d24c5467a7bdb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkrgrm.mount: Deactivated successfully.
Feb 13 15:41:47.289498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae-rootfs.mount: Deactivated successfully.
Feb 13 15:41:47.289611 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae-shm.mount: Deactivated successfully.
Feb 13 15:41:47.290150 systemd[1]: var-lib-kubelet-pods-b610ad1e\x2d8f4c\x2d449f\x2dbeb7\x2dc5b587e58f09-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6brcz.mount: Deactivated successfully.
Feb 13 15:41:47.290312 systemd[1]: var-lib-kubelet-pods-b610ad1e\x2d8f4c\x2d449f\x2dbeb7\x2dc5b587e58f09-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:41:47.290469 systemd[1]: var-lib-kubelet-pods-b610ad1e\x2d8f4c\x2d449f\x2dbeb7\x2dc5b587e58f09-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 15:41:47.375332 containerd[1504]: time="2025-02-13T15:41:47.375163841Z" level=info msg="StopPodSandbox for \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\""
Feb 13 15:41:47.376062 containerd[1504]: time="2025-02-13T15:41:47.375334668Z" level=info msg="TearDown network for sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" successfully"
Feb 13 15:41:47.376062 containerd[1504]: time="2025-02-13T15:41:47.375427626Z" level=info msg="StopPodSandbox for \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" returns successfully"
Feb 13 15:41:47.376062 containerd[1504]: time="2025-02-13T15:41:47.375995471Z" level=info msg="RemovePodSandbox for \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\""
Feb 13 15:41:47.376062 containerd[1504]: time="2025-02-13T15:41:47.376037808Z" level=info msg="Forcibly stopping sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\""
Feb 13 15:41:47.376295 containerd[1504]: time="2025-02-13T15:41:47.376121847Z" level=info msg="TearDown network for sandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" successfully"
Feb 13 15:41:47.381871 containerd[1504]: time="2025-02-13T15:41:47.381798118Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:41:47.381871 containerd[1504]: time="2025-02-13T15:41:47.381879344Z" level=info msg="RemovePodSandbox \"2156710ee84f847e14ea56b8dd9eee9b9cf63401b1d2467b157cb956295ebfae\" returns successfully"
Feb 13 15:41:47.382757 containerd[1504]: time="2025-02-13T15:41:47.382589668Z" level=info msg="StopPodSandbox for \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\""
Feb 13 15:41:47.382757 containerd[1504]: time="2025-02-13T15:41:47.382704139Z" level=info msg="TearDown network for sandbox \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\" successfully"
Feb 13 15:41:47.382757 containerd[1504]: time="2025-02-13T15:41:47.382725442Z" level=info msg="StopPodSandbox for \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\" returns successfully"
Feb 13 15:41:47.383238 containerd[1504]: time="2025-02-13T15:41:47.383192930Z" level=info msg="RemovePodSandbox for \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\""
Feb 13 15:41:47.383238 containerd[1504]: time="2025-02-13T15:41:47.383226320Z" level=info msg="Forcibly stopping sandbox \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\""
Feb 13 15:41:47.383461 containerd[1504]: time="2025-02-13T15:41:47.383305594Z" level=info msg="TearDown network for sandbox \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\" successfully"
Feb 13 15:41:47.388887 containerd[1504]: time="2025-02-13T15:41:47.388783477Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:41:47.388887 containerd[1504]: time="2025-02-13T15:41:47.388869644Z" level=info msg="RemovePodSandbox \"9a5261242c3d694c42b495eeccd3fe1509a5126803052d151b498a195464b177\" returns successfully"
Feb 13 15:41:47.438507 kubelet[2680]: I0213 15:41:47.438434 2680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b610ad1e-8f4c-449f-beb7-c5b587e58f09" path="/var/lib/kubelet/pods/b610ad1e-8f4c-449f-beb7-c5b587e58f09/volumes"
Feb 13 15:41:47.439668 kubelet[2680]: I0213 15:41:47.439630 2680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4716e82-7dfc-4609-ba52-24c5467a7bdb" path="/var/lib/kubelet/pods/d4716e82-7dfc-4609-ba52-24c5467a7bdb/volumes"
Feb 13 15:41:47.593016 kubelet[2680]: E0213 15:41:47.592914 2680 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:41:48.203424 sshd[4372]: Connection closed by 139.178.68.195 port 58394
Feb 13 15:41:48.203770 sshd-session[4369]: pam_unix(sshd:session): session closed for user core
Feb 13 15:41:48.213318 systemd[1]: sshd@29-10.128.0.26:22-139.178.68.195:58394.service: Deactivated successfully.
Feb 13 15:41:48.215190 systemd-logind[1480]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:41:48.220486 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:41:48.221105 systemd[1]: session-26.scope: Consumed 1.537s CPU time, 23.9M memory peak.
Feb 13 15:41:48.226136 systemd-logind[1480]: Removed session 26.
Feb 13 15:41:48.262933 systemd[1]: Started sshd@30-10.128.0.26:22-139.178.68.195:42552.service - OpenSSH per-connection server daemon (139.178.68.195:42552).
Feb 13 15:41:48.573238 sshd[4530]: Accepted publickey for core from 139.178.68.195 port 42552 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE
Feb 13 15:41:48.575245 sshd-session[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:48.582193 systemd-logind[1480]: New session 27 of user core.
Feb 13 15:41:48.588700 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 15:41:48.673332 ntpd[1469]: Deleting interface #12 lxc_health, fe80::bc37:34ff:fe19:47c8%8#123, interface stats: received=0, sent=0, dropped=0, active_time=90 secs
Feb 13 15:41:48.673881 ntpd[1469]: 13 Feb 15:41:48 ntpd[1469]: Deleting interface #12 lxc_health, fe80::bc37:34ff:fe19:47c8%8#123, interface stats: received=0, sent=0, dropped=0, active_time=90 secs
Feb 13 15:41:49.636406 kubelet[2680]: I0213 15:41:49.634236 2680 memory_manager.go:355] "RemoveStaleState removing state" podUID="b610ad1e-8f4c-449f-beb7-c5b587e58f09" containerName="cilium-agent"
Feb 13 15:41:49.636406 kubelet[2680]: I0213 15:41:49.634278 2680 memory_manager.go:355] "RemoveStaleState removing state" podUID="d4716e82-7dfc-4609-ba52-24c5467a7bdb" containerName="cilium-operator"
Feb 13 15:41:49.650386 sshd[4533]: Connection closed by 139.178.68.195 port 42552
Feb 13 15:41:49.655067 sshd-session[4530]: pam_unix(sshd:session): session closed for user core
Feb 13 15:41:49.656944 systemd[1]: Created slice kubepods-burstable-pode538877a_1456_4238_aa2c_0462fe669c64.slice - libcontainer container kubepods-burstable-pode538877a_1456_4238_aa2c_0462fe669c64.slice.
Feb 13 15:41:49.673121 systemd[1]: sshd@30-10.128.0.26:22-139.178.68.195:42552.service: Deactivated successfully.
Feb 13 15:41:49.680657 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 15:41:49.689614 systemd-logind[1480]: Session 27 logged out. Waiting for processes to exit.
Feb 13 15:41:49.692392 systemd-logind[1480]: Removed session 27.
Feb 13 15:41:49.728845 systemd[1]: Started sshd@31-10.128.0.26:22-139.178.68.195:42560.service - OpenSSH per-connection server daemon (139.178.68.195:42560).
Feb 13 15:41:49.798141 kubelet[2680]: I0213 15:41:49.798034 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e538877a-1456-4238-aa2c-0462fe669c64-hubble-tls\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.798141 kubelet[2680]: I0213 15:41:49.798113 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm7bl\" (UniqueName: \"kubernetes.io/projected/e538877a-1456-4238-aa2c-0462fe669c64-kube-api-access-rm7bl\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.798141 kubelet[2680]: I0213 15:41:49.798145 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e538877a-1456-4238-aa2c-0462fe669c64-cni-path\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.798721 kubelet[2680]: I0213 15:41:49.798171 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e538877a-1456-4238-aa2c-0462fe669c64-host-proc-sys-net\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.798721 kubelet[2680]: I0213 15:41:49.798197 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e538877a-1456-4238-aa2c-0462fe669c64-etc-cni-netd\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.798721 kubelet[2680]: I0213 15:41:49.798219 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e538877a-1456-4238-aa2c-0462fe669c64-lib-modules\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.798721 kubelet[2680]: I0213 15:41:49.798241 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e538877a-1456-4238-aa2c-0462fe669c64-clustermesh-secrets\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.798721 kubelet[2680]: I0213 15:41:49.798276 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e538877a-1456-4238-aa2c-0462fe669c64-cilium-ipsec-secrets\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.798891 kubelet[2680]: I0213 15:41:49.798302 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e538877a-1456-4238-aa2c-0462fe669c64-host-proc-sys-kernel\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.798891 kubelet[2680]: I0213 15:41:49.798336 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e538877a-1456-4238-aa2c-0462fe669c64-hostproc\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.798891 kubelet[2680]: I0213 15:41:49.798362 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e538877a-1456-4238-aa2c-0462fe669c64-cilium-config-path\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.798891 kubelet[2680]: I0213 15:41:49.798432 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e538877a-1456-4238-aa2c-0462fe669c64-cilium-run\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.798891 kubelet[2680]: I0213 15:41:49.798472 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e538877a-1456-4238-aa2c-0462fe669c64-bpf-maps\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.798891 kubelet[2680]: I0213 15:41:49.798503 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e538877a-1456-4238-aa2c-0462fe669c64-cilium-cgroup\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.799088 kubelet[2680]: I0213 15:41:49.798534 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e538877a-1456-4238-aa2c-0462fe669c64-xtables-lock\") pod \"cilium-8n8q9\" (UID: \"e538877a-1456-4238-aa2c-0462fe669c64\") " pod="kube-system/cilium-8n8q9"
Feb 13 15:41:49.952424 kubelet[2680]: I0213 15:41:49.949265 2680 setters.go:602] "Node became not ready" node="ci-4230-0-1-2280b10fc5cb4b2a0b2a.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:41:49Z","lastTransitionTime":"2025-02-13T15:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:41:49.989798 containerd[1504]: time="2025-02-13T15:41:49.989740769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8n8q9,Uid:e538877a-1456-4238-aa2c-0462fe669c64,Namespace:kube-system,Attempt:0,}"
Feb 13 15:41:50.032199 containerd[1504]: time="2025-02-13T15:41:50.031708465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:41:50.032199 containerd[1504]: time="2025-02-13T15:41:50.031813533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:41:50.032199 containerd[1504]: time="2025-02-13T15:41:50.031840376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:41:50.032199 containerd[1504]: time="2025-02-13T15:41:50.031979280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:41:50.062530 systemd[1]: Started cri-containerd-e3e195c2ee33a9c6b256118ca7c9aee7be09c5d31aef2e384eae2286617ffe2a.scope - libcontainer container e3e195c2ee33a9c6b256118ca7c9aee7be09c5d31aef2e384eae2286617ffe2a.
Feb 13 15:41:50.069149 sshd[4544]: Accepted publickey for core from 139.178.68.195 port 42560 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE
Feb 13 15:41:50.072195 sshd-session[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:50.081806 systemd-logind[1480]: New session 28 of user core.
Feb 13 15:41:50.088637 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 15:41:50.116423 containerd[1504]: time="2025-02-13T15:41:50.116216765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8n8q9,Uid:e538877a-1456-4238-aa2c-0462fe669c64,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3e195c2ee33a9c6b256118ca7c9aee7be09c5d31aef2e384eae2286617ffe2a\""
Feb 13 15:41:50.121112 containerd[1504]: time="2025-02-13T15:41:50.121053436Z" level=info msg="CreateContainer within sandbox \"e3e195c2ee33a9c6b256118ca7c9aee7be09c5d31aef2e384eae2286617ffe2a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:41:50.138124 containerd[1504]: time="2025-02-13T15:41:50.137914282Z" level=info msg="CreateContainer within sandbox \"e3e195c2ee33a9c6b256118ca7c9aee7be09c5d31aef2e384eae2286617ffe2a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"842a60ac277a0251ae63e5a514e3b47fe9afc8f5f4367526f185c1d3da0df306\""
Feb 13 15:41:50.139861 containerd[1504]: time="2025-02-13T15:41:50.139759101Z" level=info msg="StartContainer for \"842a60ac277a0251ae63e5a514e3b47fe9afc8f5f4367526f185c1d3da0df306\""
Feb 13 15:41:50.182773 systemd[1]: Started cri-containerd-842a60ac277a0251ae63e5a514e3b47fe9afc8f5f4367526f185c1d3da0df306.scope - libcontainer container 842a60ac277a0251ae63e5a514e3b47fe9afc8f5f4367526f185c1d3da0df306.
Feb 13 15:41:50.222115 containerd[1504]: time="2025-02-13T15:41:50.221854821Z" level=info msg="StartContainer for \"842a60ac277a0251ae63e5a514e3b47fe9afc8f5f4367526f185c1d3da0df306\" returns successfully"
Feb 13 15:41:50.234649 systemd[1]: cri-containerd-842a60ac277a0251ae63e5a514e3b47fe9afc8f5f4367526f185c1d3da0df306.scope: Deactivated successfully.
Feb 13 15:41:50.282366 containerd[1504]: time="2025-02-13T15:41:50.282203512Z" level=info msg="shim disconnected" id=842a60ac277a0251ae63e5a514e3b47fe9afc8f5f4367526f185c1d3da0df306 namespace=k8s.io
Feb 13 15:41:50.282366 containerd[1504]: time="2025-02-13T15:41:50.282286744Z" level=warning msg="cleaning up after shim disconnected" id=842a60ac277a0251ae63e5a514e3b47fe9afc8f5f4367526f185c1d3da0df306 namespace=k8s.io
Feb 13 15:41:50.282366 containerd[1504]: time="2025-02-13T15:41:50.282302572Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:41:50.290182 sshd[4584]: Connection closed by 139.178.68.195 port 42560
Feb 13 15:41:50.293727 sshd-session[4544]: pam_unix(sshd:session): session closed for user core
Feb 13 15:41:50.300922 systemd[1]: sshd@31-10.128.0.26:22-139.178.68.195:42560.service: Deactivated successfully.
Feb 13 15:41:50.306773 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 15:41:50.310973 systemd-logind[1480]: Session 28 logged out. Waiting for processes to exit.
Feb 13 15:41:50.313126 systemd-logind[1480]: Removed session 28.
Feb 13 15:41:50.351344 systemd[1]: Started sshd@32-10.128.0.26:22-139.178.68.195:42568.service - OpenSSH per-connection server daemon (139.178.68.195:42568).
Feb 13 15:41:50.433861 kubelet[2680]: E0213 15:41:50.433511 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-wrxm9" podUID="e253d7d5-901b-47ec-9c74-0cd5ed7324d8"
Feb 13 15:41:50.640017 sshd[4659]: Accepted publickey for core from 139.178.68.195 port 42568 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE
Feb 13 15:41:50.642239 sshd-session[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:41:50.648771 systemd-logind[1480]: New session 29 of user core.
Feb 13 15:41:50.657657 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 15:41:50.858832 containerd[1504]: time="2025-02-13T15:41:50.858723148Z" level=info msg="CreateContainer within sandbox \"e3e195c2ee33a9c6b256118ca7c9aee7be09c5d31aef2e384eae2286617ffe2a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:41:50.887665 containerd[1504]: time="2025-02-13T15:41:50.886644478Z" level=info msg="CreateContainer within sandbox \"e3e195c2ee33a9c6b256118ca7c9aee7be09c5d31aef2e384eae2286617ffe2a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"027cdfb16661263a24c7638ef59bbb5e68cee1045e1230527d48a5aed823c595\""
Feb 13 15:41:50.887665 containerd[1504]: time="2025-02-13T15:41:50.887607610Z" level=info msg="StartContainer for \"027cdfb16661263a24c7638ef59bbb5e68cee1045e1230527d48a5aed823c595\""
Feb 13 15:41:50.989252 systemd[1]: Started cri-containerd-027cdfb16661263a24c7638ef59bbb5e68cee1045e1230527d48a5aed823c595.scope - libcontainer container 027cdfb16661263a24c7638ef59bbb5e68cee1045e1230527d48a5aed823c595.
Feb 13 15:41:51.083429 containerd[1504]: time="2025-02-13T15:41:51.083342682Z" level=info msg="StartContainer for \"027cdfb16661263a24c7638ef59bbb5e68cee1045e1230527d48a5aed823c595\" returns successfully"
Feb 13 15:41:51.114914 systemd[1]: cri-containerd-027cdfb16661263a24c7638ef59bbb5e68cee1045e1230527d48a5aed823c595.scope: Deactivated successfully.
Feb 13 15:41:51.150061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-027cdfb16661263a24c7638ef59bbb5e68cee1045e1230527d48a5aed823c595-rootfs.mount: Deactivated successfully.
Feb 13 15:41:51.157315 containerd[1504]: time="2025-02-13T15:41:51.157228878Z" level=info msg="shim disconnected" id=027cdfb16661263a24c7638ef59bbb5e68cee1045e1230527d48a5aed823c595 namespace=k8s.io
Feb 13 15:41:51.157315 containerd[1504]: time="2025-02-13T15:41:51.157312346Z" level=warning msg="cleaning up after shim disconnected" id=027cdfb16661263a24c7638ef59bbb5e68cee1045e1230527d48a5aed823c595 namespace=k8s.io
Feb 13 15:41:51.157754 containerd[1504]: time="2025-02-13T15:41:51.157327828Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:41:51.866498 containerd[1504]: time="2025-02-13T15:41:51.866421532Z" level=info msg="CreateContainer within sandbox \"e3e195c2ee33a9c6b256118ca7c9aee7be09c5d31aef2e384eae2286617ffe2a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:41:51.905642 containerd[1504]: time="2025-02-13T15:41:51.905526563Z" level=info msg="CreateContainer within sandbox \"e3e195c2ee33a9c6b256118ca7c9aee7be09c5d31aef2e384eae2286617ffe2a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ee939bc8301a5decc8e593fa2b27621e7daae99576a8c382ab6a3fca1abb5384\""
Feb 13 15:41:51.910900 containerd[1504]: time="2025-02-13T15:41:51.910815016Z" level=info msg="StartContainer for \"ee939bc8301a5decc8e593fa2b27621e7daae99576a8c382ab6a3fca1abb5384\""
Feb 13 15:41:51.918067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037240811.mount: Deactivated successfully.
Feb 13 15:41:51.978720 systemd[1]: Started cri-containerd-ee939bc8301a5decc8e593fa2b27621e7daae99576a8c382ab6a3fca1abb5384.scope - libcontainer container ee939bc8301a5decc8e593fa2b27621e7daae99576a8c382ab6a3fca1abb5384.
Feb 13 15:41:52.031448 containerd[1504]: time="2025-02-13T15:41:52.031361124Z" level=info msg="StartContainer for \"ee939bc8301a5decc8e593fa2b27621e7daae99576a8c382ab6a3fca1abb5384\" returns successfully"
Feb 13 15:41:52.037656 systemd[1]: cri-containerd-ee939bc8301a5decc8e593fa2b27621e7daae99576a8c382ab6a3fca1abb5384.scope: Deactivated successfully.
Feb 13 15:41:52.081536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee939bc8301a5decc8e593fa2b27621e7daae99576a8c382ab6a3fca1abb5384-rootfs.mount: Deactivated successfully.
Feb 13 15:41:52.083075 containerd[1504]: time="2025-02-13T15:41:52.082040540Z" level=info msg="shim disconnected" id=ee939bc8301a5decc8e593fa2b27621e7daae99576a8c382ab6a3fca1abb5384 namespace=k8s.io
Feb 13 15:41:52.083075 containerd[1504]: time="2025-02-13T15:41:52.082308322Z" level=warning msg="cleaning up after shim disconnected" id=ee939bc8301a5decc8e593fa2b27621e7daae99576a8c382ab6a3fca1abb5384 namespace=k8s.io
Feb 13 15:41:52.083075 containerd[1504]: time="2025-02-13T15:41:52.082329912Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:41:52.434357 kubelet[2680]: E0213 15:41:52.434275 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-wrxm9" podUID="e253d7d5-901b-47ec-9c74-0cd5ed7324d8"
Feb 13 15:41:52.594834 kubelet[2680]: E0213 15:41:52.594753 2680 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:41:52.870389 containerd[1504]: time="2025-02-13T15:41:52.870309533Z" level=info msg="CreateContainer within sandbox \"e3e195c2ee33a9c6b256118ca7c9aee7be09c5d31aef2e384eae2286617ffe2a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:41:52.893521 containerd[1504]: time="2025-02-13T15:41:52.893466128Z" level=info msg="CreateContainer within sandbox \"e3e195c2ee33a9c6b256118ca7c9aee7be09c5d31aef2e384eae2286617ffe2a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9609f6d5eb11fedccf1537e81000f6d780c66fe5d21acd0c8f6499add00b13a2\""
Feb 13 15:41:52.894427 containerd[1504]: time="2025-02-13T15:41:52.894163392Z" level=info msg="StartContainer for \"9609f6d5eb11fedccf1537e81000f6d780c66fe5d21acd0c8f6499add00b13a2\""
Feb 13 15:41:52.953634 systemd[1]: Started cri-containerd-9609f6d5eb11fedccf1537e81000f6d780c66fe5d21acd0c8f6499add00b13a2.scope - libcontainer container 9609f6d5eb11fedccf1537e81000f6d780c66fe5d21acd0c8f6499add00b13a2.
Feb 13 15:41:52.989496 systemd[1]: cri-containerd-9609f6d5eb11fedccf1537e81000f6d780c66fe5d21acd0c8f6499add00b13a2.scope: Deactivated successfully.
Feb 13 15:41:52.991855 containerd[1504]: time="2025-02-13T15:41:52.991783708Z" level=info msg="StartContainer for \"9609f6d5eb11fedccf1537e81000f6d780c66fe5d21acd0c8f6499add00b13a2\" returns successfully"
Feb 13 15:41:53.022208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9609f6d5eb11fedccf1537e81000f6d780c66fe5d21acd0c8f6499add00b13a2-rootfs.mount: Deactivated successfully.
Feb 13 15:41:53.026248 containerd[1504]: time="2025-02-13T15:41:53.026123889Z" level=info msg="shim disconnected" id=9609f6d5eb11fedccf1537e81000f6d780c66fe5d21acd0c8f6499add00b13a2 namespace=k8s.io
Feb 13 15:41:53.026248 containerd[1504]: time="2025-02-13T15:41:53.026207244Z" level=warning msg="cleaning up after shim disconnected" id=9609f6d5eb11fedccf1537e81000f6d780c66fe5d21acd0c8f6499add00b13a2 namespace=k8s.io
Feb 13 15:41:53.026248 containerd[1504]: time="2025-02-13T15:41:53.026222262Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:41:53.067327 containerd[1504]: time="2025-02-13T15:41:53.067236600Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:41:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:41:53.874836 containerd[1504]: time="2025-02-13T15:41:53.874547522Z" level=info msg="CreateContainer within sandbox \"e3e195c2ee33a9c6b256118ca7c9aee7be09c5d31aef2e384eae2286617ffe2a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:41:53.899975 containerd[1504]: time="2025-02-13T15:41:53.899793966Z" level=info msg="CreateContainer within sandbox \"e3e195c2ee33a9c6b256118ca7c9aee7be09c5d31aef2e384eae2286617ffe2a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"98617e6de73d62a251b609421b3a498e84adc23338130ba7f986d4db8cd3f1a3\""
Feb 13 15:41:53.902184 containerd[1504]: time="2025-02-13T15:41:53.902139855Z" level=info msg="StartContainer for \"98617e6de73d62a251b609421b3a498e84adc23338130ba7f986d4db8cd3f1a3\""
Feb 13 15:41:53.962967 systemd[1]: Started cri-containerd-98617e6de73d62a251b609421b3a498e84adc23338130ba7f986d4db8cd3f1a3.scope - libcontainer container 98617e6de73d62a251b609421b3a498e84adc23338130ba7f986d4db8cd3f1a3.
Feb 13 15:41:54.015577 containerd[1504]: time="2025-02-13T15:41:54.015509479Z" level=info msg="StartContainer for \"98617e6de73d62a251b609421b3a498e84adc23338130ba7f986d4db8cd3f1a3\" returns successfully"
Feb 13 15:41:54.064650 systemd[1]: run-containerd-runc-k8s.io-98617e6de73d62a251b609421b3a498e84adc23338130ba7f986d4db8cd3f1a3-runc.r67LPq.mount: Deactivated successfully.
Feb 13 15:41:54.435895 kubelet[2680]: E0213 15:41:54.434282 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-wrxm9" podUID="e253d7d5-901b-47ec-9c74-0cd5ed7324d8"
Feb 13 15:41:54.547464 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 15:41:54.907102 kubelet[2680]: I0213 15:41:54.906836 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8n8q9" podStartSLOduration=5.9067837 podStartE2EDuration="5.9067837s" podCreationTimestamp="2025-02-13 15:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:41:54.90470688 +0000 UTC m=+127.715587963" watchObservedRunningTime="2025-02-13 15:41:54.9067837 +0000 UTC m=+127.717664791"
Feb 13 15:41:55.197359 systemd[1]: run-containerd-runc-k8s.io-98617e6de73d62a251b609421b3a498e84adc23338130ba7f986d4db8cd3f1a3-runc.WxCMoO.mount: Deactivated successfully.
Feb 13 15:41:56.434572 kubelet[2680]: E0213 15:41:56.434309 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-wrxm9" podUID="e253d7d5-901b-47ec-9c74-0cd5ed7324d8"
Feb 13 15:41:58.233105 systemd-networkd[1397]: lxc_health: Link UP
Feb 13 15:41:58.245980 systemd-networkd[1397]: lxc_health: Gained carrier
Feb 13 15:41:58.609760 systemd[1]: Started sshd@33-10.128.0.26:22-218.92.0.190:39939.service - OpenSSH per-connection server daemon (218.92.0.190:39939).
Feb 13 15:41:59.747016 systemd[1]: run-containerd-runc-k8s.io-98617e6de73d62a251b609421b3a498e84adc23338130ba7f986d4db8cd3f1a3-runc.Ok0a9Z.mount: Deactivated successfully.
Feb 13 15:41:59.927519 kubelet[2680]: E0213 15:41:59.927440 2680 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43668->127.0.0.1:41121: write tcp 127.0.0.1:43668->127.0.0.1:41121: write: broken pipe
Feb 13 15:42:00.089507 systemd-networkd[1397]: lxc_health: Gained IPv6LL
Feb 13 15:42:00.309507 sshd[5434]: PAM: Permission denied for root from 218.92.0.190
Feb 13 15:42:01.430788 sshd[5434]: PAM: Permission denied for root from 218.92.0.190
Feb 13 15:42:02.103154 systemd[1]: run-containerd-runc-k8s.io-98617e6de73d62a251b609421b3a498e84adc23338130ba7f986d4db8cd3f1a3-runc.tSW752.mount: Deactivated successfully.
Feb 13 15:42:02.487438 sshd[5434]: PAM: Permission denied for root from 218.92.0.190
Feb 13 15:42:02.673729 ntpd[1469]: Listen normally on 15 lxc_health [fe80::a843:45ff:fed5:69f6%14]:123
Feb 13 15:42:02.674554 ntpd[1469]: 13 Feb 15:42:02 ntpd[1469]: Listen normally on 15 lxc_health [fe80::a843:45ff:fed5:69f6%14]:123
Feb 13 15:42:03.574485 sshd[5434]: Received disconnect from 218.92.0.190 port 39939:11: [preauth]
Feb 13 15:42:03.574485 sshd[5434]: Disconnected from authenticating user root 218.92.0.190 port 39939 [preauth]
Feb 13 15:42:03.580338 systemd[1]: sshd@33-10.128.0.26:22-218.92.0.190:39939.service: Deactivated successfully.
Feb 13 15:42:04.477211 sshd[4661]: Connection closed by 139.178.68.195 port 42568
Feb 13 15:42:04.480648 sshd-session[4659]: pam_unix(sshd:session): session closed for user core
Feb 13 15:42:04.488757 systemd-logind[1480]: Session 29 logged out. Waiting for processes to exit.
Feb 13 15:42:04.493445 systemd[1]: sshd@32-10.128.0.26:22-139.178.68.195:42568.service: Deactivated successfully.
Feb 13 15:42:04.498547 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 15:42:04.501097 systemd-logind[1480]: Removed session 29.