Feb 13 15:38:44.114182 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:00:20 -00 2025
Feb 13 15:38:44.114230 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:38:44.114249 kernel: BIOS-provided physical RAM map:
Feb 13 15:38:44.114263 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Feb 13 15:38:44.114276 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Feb 13 15:38:44.114290 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Feb 13 15:38:44.114308 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Feb 13 15:38:44.114338 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Feb 13 15:38:44.114355 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd328fff] usable
Feb 13 15:38:44.114368 kernel: BIOS-e820: [mem 0x00000000bd329000-0x00000000bd330fff] ACPI data
Feb 13 15:38:44.114380 kernel: BIOS-e820: [mem 0x00000000bd331000-0x00000000bf8ecfff] usable
Feb 13 15:38:44.114393 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Feb 13 15:38:44.114407 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Feb 13 15:38:44.114421 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Feb 13 15:38:44.114448 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Feb 13 15:38:44.114465 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Feb 13 15:38:44.114480 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Feb 13 15:38:44.114494 kernel: NX (Execute Disable) protection: active
Feb 13 15:38:44.114508 kernel: APIC: Static calls initialized
Feb 13 15:38:44.114522 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:38:44.114537 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd329018
Feb 13 15:38:44.114551 kernel: random: crng init done
Feb 13 15:38:44.114566 kernel: secureboot: Secure boot disabled
Feb 13 15:38:44.114580 kernel: SMBIOS 2.4 present.
Feb 13 15:38:44.114599 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Feb 13 15:38:44.114614 kernel: Hypervisor detected: KVM
Feb 13 15:38:44.114628 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:38:44.114643 kernel: kvm-clock: using sched offset of 13389725543 cycles
Feb 13 15:38:44.114659 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:38:44.114674 kernel: tsc: Detected 2299.998 MHz processor
Feb 13 15:38:44.114690 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:38:44.114705 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:38:44.114721 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 13 15:38:44.114736 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Feb 13 15:38:44.114757 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:38:44.114771 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 13 15:38:44.114785 kernel: Using GB pages for direct mapping
Feb 13 15:38:44.114807 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:38:44.114821 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 13 15:38:44.114837 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Feb 13 15:38:44.114861 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 13 15:38:44.114881 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 13 15:38:44.114897 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 13 15:38:44.114921 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Feb 13 15:38:44.114958 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Feb 13 15:38:44.114988 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 13 15:38:44.115006 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 13 15:38:44.115024 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 13 15:38:44.115046 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 13 15:38:44.115064 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 13 15:38:44.115081 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 13 15:38:44.115098 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 13 15:38:44.115116 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 13 15:38:44.115133 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 13 15:38:44.115150 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 13 15:38:44.115167 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 13 15:38:44.115185 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 13 15:38:44.115206 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 13 15:38:44.115223 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:38:44.115240 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:38:44.115257 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 15:38:44.115274 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 13 15:38:44.115292 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 13 15:38:44.115309 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 13 15:38:44.116069 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 13 15:38:44.116089 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Feb 13 15:38:44.116114 kernel: Zone ranges:
Feb 13 15:38:44.116132 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:38:44.116150 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 15:38:44.116168 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 15:38:44.116185 kernel: Movable zone start for each node
Feb 13 15:38:44.116202 kernel: Early memory node ranges
Feb 13 15:38:44.116221 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 13 15:38:44.116238 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 13 15:38:44.116255 kernel: node 0: [mem 0x0000000000100000-0x00000000bd328fff]
Feb 13 15:38:44.116276 kernel: node 0: [mem 0x00000000bd331000-0x00000000bf8ecfff]
Feb 13 15:38:44.116293 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 13 15:38:44.116311 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 15:38:44.116343 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 13 15:38:44.116362 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:38:44.116379 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 13 15:38:44.116396 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 13 15:38:44.116414 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Feb 13 15:38:44.116432 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 15:38:44.116453 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 13 15:38:44.116471 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 15:38:44.116488 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:38:44.116506 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:38:44.116524 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:38:44.116541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:38:44.116558 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:38:44.116575 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:38:44.116593 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:38:44.116614 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:38:44.116632 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 13 15:38:44.116649 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:38:44.116667 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:38:44.116685 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:38:44.116702 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:38:44.116719 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:38:44.116736 kernel: pcpu-alloc: [0] 0 1
Feb 13 15:38:44.116753 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:38:44.116775 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:38:44.116794 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:38:44.116819 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:38:44.116837 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 15:38:44.116854 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:38:44.116872 kernel: Fallback order for Node 0: 0
Feb 13 15:38:44.116890 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272
Feb 13 15:38:44.116907 kernel: Policy zone: Normal
Feb 13 15:38:44.116928 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:38:44.116946 kernel: software IO TLB: area num 2.
Feb 13 15:38:44.116964 kernel: Memory: 7511320K/7860552K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 348976K reserved, 0K cma-reserved)
Feb 13 15:38:44.116982 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:38:44.117000 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:38:44.117017 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 15:38:44.117035 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:38:44.117052 kernel: Dynamic Preempt: voluntary
Feb 13 15:38:44.118877 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:38:44.118902 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:38:44.118920 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:38:44.118939 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:38:44.118963 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:38:44.118983 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:38:44.119002 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:38:44.119021 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:38:44.119040 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:38:44.119063 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:38:44.119083 kernel: Console: colour dummy device 80x25
Feb 13 15:38:44.119101 kernel: printk: console [ttyS0] enabled
Feb 13 15:38:44.119119 kernel: ACPI: Core revision 20230628
Feb 13 15:38:44.119138 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:38:44.119157 kernel: x2apic enabled
Feb 13 15:38:44.119176 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:38:44.119194 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 13 15:38:44.119214 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 15:38:44.119237 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 13 15:38:44.119255 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 13 15:38:44.119275 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 13 15:38:44.119294 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:38:44.119329 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 15:38:44.119350 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 15:38:44.119368 kernel: Spectre V2 : Mitigation: IBRS
Feb 13 15:38:44.119386 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:38:44.119402 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:38:44.119425 kernel: RETBleed: Mitigation: IBRS
Feb 13 15:38:44.119444 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:38:44.119463 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Feb 13 15:38:44.119481 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:38:44.119499 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 15:38:44.119517 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:38:44.119536 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:38:44.119554 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:38:44.119576 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:38:44.119594 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:38:44.119613 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 15:38:44.119630 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:38:44.119647 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:38:44.119665 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:38:44.119684 kernel: landlock: Up and running.
Feb 13 15:38:44.119702 kernel: SELinux: Initializing.
Feb 13 15:38:44.119720 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:38:44.119742 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:38:44.119760 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 13 15:38:44.119778 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:38:44.119796 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:38:44.119821 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:38:44.119839 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 13 15:38:44.119858 kernel: signal: max sigframe size: 1776
Feb 13 15:38:44.119875 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:38:44.119894 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:38:44.119916 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:38:44.119934 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:38:44.119952 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:38:44.119970 kernel: .... node #0, CPUs: #1
Feb 13 15:38:44.119989 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 15:38:44.120009 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:38:44.120027 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:38:44.120045 kernel: smpboot: Max logical packages: 1
Feb 13 15:38:44.120063 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 13 15:38:44.120085 kernel: devtmpfs: initialized
Feb 13 15:38:44.120102 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:38:44.120120 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 13 15:38:44.120139 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:38:44.120157 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:38:44.120175 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:38:44.120193 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:38:44.120211 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:38:44.120229 kernel: audit: type=2000 audit(1739461123.111:1): state=initialized audit_enabled=0 res=1
Feb 13 15:38:44.120250 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:38:44.120267 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:38:44.120286 kernel: cpuidle: using governor menu
Feb 13 15:38:44.120303 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:38:44.120341 kernel: dca service started, version 1.12.1
Feb 13 15:38:44.120358 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:38:44.120376 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:38:44.120394 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:38:44.120416 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:38:44.120434 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:38:44.120453 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:38:44.120471 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:38:44.120488 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:38:44.120506 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:38:44.120524 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:38:44.120540 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 15:38:44.120557 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:38:44.120576 kernel: ACPI: Interpreter enabled
Feb 13 15:38:44.120598 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:38:44.120628 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:38:44.120655 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:38:44.120672 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 15:38:44.120688 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 15:38:44.120706 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:38:44.120983 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:38:44.121184 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:38:44.121847 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:38:44.121878 kernel: PCI host bridge to bus 0000:00
Feb 13 15:38:44.122479 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:38:44.122955 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:38:44.123149 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:38:44.123356 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 13 15:38:44.123546 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:38:44.123765 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:38:44.123984 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 13 15:38:44.124195 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 15:38:44.124430 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 15:38:44.124664 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 13 15:38:44.124890 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Feb 13 15:38:44.125090 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 13 15:38:44.125288 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:38:44.127233 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Feb 13 15:38:44.127702 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 13 15:38:44.128459 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:38:44.128681 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 13 15:38:44.128902 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 13 15:38:44.128928 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:38:44.128947 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:38:44.128965 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:38:44.128984 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:38:44.129001 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:38:44.129019 kernel: iommu: Default domain type: Translated
Feb 13 15:38:44.129036 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:38:44.129053 kernel: efivars: Registered efivars operations
Feb 13 15:38:44.129077 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:38:44.129095 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:38:44.129113 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 13 15:38:44.129131 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 13 15:38:44.129148 kernel: e820: reserve RAM buffer [mem 0xbd329000-0xbfffffff]
Feb 13 15:38:44.129165 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 13 15:38:44.129182 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 13 15:38:44.129198 kernel: vgaarb: loaded
Feb 13 15:38:44.129216 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:38:44.129240 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:38:44.129258 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:38:44.129276 kernel: pnp: PnP ACPI init
Feb 13 15:38:44.129294 kernel: pnp: PnP ACPI: found 7 devices
Feb 13 15:38:44.129312 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:38:44.129370 kernel: NET: Registered PF_INET protocol family
Feb 13 15:38:44.129389 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:38:44.129408 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 15:38:44.129427 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:38:44.129451 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:38:44.129470 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 15:38:44.129489 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 15:38:44.129508 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:38:44.129527 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:38:44.129546 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:38:44.129563 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:38:44.129753 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:38:44.129950 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:38:44.130117 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:38:44.130282 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 13 15:38:44.130504 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:38:44.130530 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:38:44.130550 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 15:38:44.130570 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Feb 13 15:38:44.130589 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:38:44.130614 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 15:38:44.130633 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:38:44.130653 kernel: Initialise system trusted keyrings
Feb 13 15:38:44.130672 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 15:38:44.130690 kernel: Key type asymmetric registered
Feb 13 15:38:44.130709 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:38:44.130727 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:38:44.130746 kernel: io scheduler mq-deadline registered
Feb 13 15:38:44.130769 kernel: io scheduler kyber registered
Feb 13 15:38:44.130788 kernel: io scheduler bfq registered
Feb 13 15:38:44.130815 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:38:44.130835 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 15:38:44.131031 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 13 15:38:44.131056 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 13 15:38:44.131244 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 13 15:38:44.131269 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 15:38:44.133553 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 13 15:38:44.133593 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:38:44.133614 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:38:44.133633 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 15:38:44.133650 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 13 15:38:44.133667 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 13 15:38:44.133894 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 13 15:38:44.133922 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:38:44.133942 kernel: i8042: Warning: Keylock active
Feb 13 15:38:44.133967 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:38:44.133985 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:38:44.134182 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:38:44.135473 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:38:44.135671 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:38:43 UTC (1739461123)
Feb 13 15:38:44.135857 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:38:44.135882 kernel: intel_pstate: CPU model not supported
Feb 13 15:38:44.135902 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:38:44.135927 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:38:44.135945 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:38:44.135963 kernel: Segment Routing with IPv6
Feb 13 15:38:44.135982 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:38:44.136001 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:38:44.136019 kernel: Key type dns_resolver registered
Feb 13 15:38:44.136038 kernel: IPI shorthand broadcast: enabled
Feb 13 15:38:44.136073 kernel: sched_clock: Marking stable (875004061, 174283822)->(1104497069, -55209186)
Feb 13 15:38:44.136091 kernel: registered taskstats version 1
Feb 13 15:38:44.136114 kernel: Loading compiled-in X.509 certificates
Feb 13 15:38:44.136134 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: a260c8876205efb4ca2ab3eb040cd310ec7afd21'
Feb 13 15:38:44.136152 kernel: Key type .fscrypt registered
Feb 13 15:38:44.136171 kernel: Key type fscrypt-provisioning registered
Feb 13 15:38:44.136190 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:38:44.136209 kernel: ima: No architecture policies found
Feb 13 15:38:44.136227 kernel: clk: Disabling unused clocks
Feb 13 15:38:44.136246 kernel: Freeing unused kernel image (initmem) memory: 43476K
Feb 13 15:38:44.136264 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 15:38:44.136287 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Feb 13 15:38:44.136305 kernel: Run /init as init process
Feb 13 15:38:44.137360 kernel: with arguments:
Feb 13 15:38:44.137384 kernel: /init
Feb 13 15:38:44.137404 kernel: with environment:
Feb 13 15:38:44.137422 kernel: HOME=/
Feb 13 15:38:44.137441 kernel: TERM=linux
Feb 13 15:38:44.137460 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:38:44.137480 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:38:44.137508 systemd[1]: Successfully made /usr/ read-only.
Feb 13 15:38:44.137533 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:38:44.137554 systemd[1]: Detected virtualization google.
Feb 13 15:38:44.137574 systemd[1]: Detected architecture x86-64.
Feb 13 15:38:44.137593 systemd[1]: Running in initrd.
Feb 13 15:38:44.137613 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:38:44.137634 systemd[1]: Hostname set to .
Feb 13 15:38:44.137658 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:38:44.137678 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:38:44.137698 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:38:44.137719 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:38:44.137740 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:38:44.137760 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:38:44.137781 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:38:44.137819 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:38:44.137858 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:38:44.137883 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:38:44.137904 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:38:44.137926 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:38:44.137950 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:38:44.137971 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:38:44.137992 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:38:44.138013 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:38:44.138034 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:38:44.138055 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:38:44.138077 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:38:44.138099 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 15:38:44.138120 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:38:44.138145 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:38:44.138166 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:38:44.138188 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:38:44.138207 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:38:44.138226 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:38:44.138252 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:38:44.138273 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:38:44.138294 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:38:44.140412 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:38:44.140452 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:38:44.140519 systemd-journald[184]: Collecting audit messages is disabled.
Feb 13 15:38:44.140566 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:38:44.140588 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:38:44.140615 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:38:44.140637 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:38:44.140659 systemd-journald[184]: Journal started
Feb 13 15:38:44.140705 systemd-journald[184]: Runtime Journal (/run/log/journal/3e495886c3b148daab77b006bd1fc8d2) is 8M, max 148.6M, 140.6M free.
Feb 13 15:38:44.118741 systemd-modules-load[185]: Inserted module 'overlay'
Feb 13 15:38:44.148456 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:38:44.153375 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:38:44.176362 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:38:44.177648 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:38:44.178464 kernel: Bridge firewalling registered
Feb 13 15:38:44.178285 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 13 15:38:44.189608 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:38:44.190968 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:38:44.191828 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:38:44.207625 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:38:44.216604 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:38:44.220230 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:38:44.231922 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:38:44.236924 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:38:44.244750 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:38:44.260543 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:38:44.266935 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:38:44.280098 dracut-cmdline[217]: dracut-dracut-053
Feb 13 15:38:44.284776 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:38:44.335456 systemd-resolved[218]: Positive Trust Anchors:
Feb 13 15:38:44.335897 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:38:44.335976 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:38:44.341809 systemd-resolved[218]: Defaulting to hostname 'linux'.
Feb 13 15:38:44.343538 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:38:44.356609 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:38:44.402368 kernel: SCSI subsystem initialized
Feb 13 15:38:44.413372 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:38:44.425367 kernel: iscsi: registered transport (tcp)
Feb 13 15:38:44.448522 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:38:44.448624 kernel: QLogic iSCSI HBA Driver
Feb 13 15:38:44.501306 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:38:44.508565 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:38:44.548757 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:38:44.548855 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:38:44.548884 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:38:44.594377 kernel: raid6: avx2x4 gen() 18385 MB/s
Feb 13 15:38:44.611393 kernel: raid6: avx2x2 gen() 17944 MB/s
Feb 13 15:38:44.628788 kernel: raid6: avx2x1 gen() 14197 MB/s
Feb 13 15:38:44.628843 kernel: raid6: using algorithm avx2x4 gen() 18385 MB/s
Feb 13 15:38:44.656205 kernel: raid6: .... xor() 6428 MB/s, rmw enabled
Feb 13 15:38:44.656309 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:38:44.685373 kernel: xor: automatically using best checksumming function avx
Feb 13 15:38:44.857357 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:38:44.871337 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:38:44.876599 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:38:44.919787 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Feb 13 15:38:44.928387 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:38:44.959586 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:38:44.980524 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Feb 13 15:38:45.014142 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:38:45.019563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:38:45.120405 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:38:45.129619 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:38:45.185024 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:38:45.206297 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:38:45.230863 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:38:45.247336 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:38:45.264840 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:38:45.314357 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:38:45.326397 kernel: scsi host0: Virtio SCSI HBA
Feb 13 15:38:45.326491 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:38:45.345732 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:38:45.382517 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Feb 13 15:38:45.380411 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:38:45.380675 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:38:45.416700 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:38:45.447497 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Feb 13 15:38:45.503498 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Feb 13 15:38:45.504551 kernel: sd 0:0:1:0: [sda] Write Protect is off
Feb 13 15:38:45.504795 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Feb 13 15:38:45.505029 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 15:38:45.505241 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:38:45.505266 kernel: GPT:17805311 != 25165823
Feb 13 15:38:45.505298 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:38:45.505343 kernel: GPT:17805311 != 25165823
Feb 13 15:38:45.505365 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:38:45.505387 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:38:45.505409 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Feb 13 15:38:45.473177 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:38:45.473506 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:38:45.514082 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:38:45.562377 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (453)
Feb 13 15:38:45.576366 kernel: BTRFS: device fsid 506754f7-5ef1-4c63-ad2a-b7b855a48f85 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (458)
Feb 13 15:38:45.587778 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:38:45.588525 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:38:45.589040 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:38:45.657682 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Feb 13 15:38:45.658186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:38:45.702227 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Feb 13 15:38:45.723854 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 15:38:45.734378 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Feb 13 15:38:45.755545 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Feb 13 15:38:45.787580 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:38:45.805720 disk-uuid[542]: Primary Header is updated.
Feb 13 15:38:45.805720 disk-uuid[542]: Secondary Entries is updated.
Feb 13 15:38:45.805720 disk-uuid[542]: Secondary Header is updated.
Feb 13 15:38:45.852494 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:38:45.852537 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:38:45.818544 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:38:45.897994 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:38:46.856342 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:38:46.857557 disk-uuid[543]: The operation has completed successfully.
Feb 13 15:38:46.941072 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:38:46.941237 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:38:47.001572 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:38:47.031813 sh[566]: Success
Feb 13 15:38:47.055580 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:38:47.145827 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:38:47.152917 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:38:47.180950 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:38:47.242496 kernel: BTRFS info (device dm-0): first mount of filesystem 506754f7-5ef1-4c63-ad2a-b7b855a48f85
Feb 13 15:38:47.242533 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:38:47.242549 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:38:47.242564 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:38:47.242584 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:38:47.262371 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:38:47.268033 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:38:47.277381 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:38:47.283540 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:38:47.338505 kernel: BTRFS info (device sda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:38:47.338577 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:38:47.338617 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:38:47.286624 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:38:47.388536 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:38:47.388671 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:38:47.388713 kernel: BTRFS info (device sda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:38:47.368415 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:38:47.395860 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:38:47.423691 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:38:47.515176 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:38:47.520573 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:38:47.627034 ignition[670]: Ignition 2.20.0
Feb 13 15:38:47.627643 ignition[670]: Stage: fetch-offline
Feb 13 15:38:47.630999 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:38:47.627709 ignition[670]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:38:47.642498 systemd-networkd[750]: lo: Link UP
Feb 13 15:38:47.627724 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:38:47.642504 systemd-networkd[750]: lo: Gained carrier
Feb 13 15:38:47.627896 ignition[670]: parsed url from cmdline: ""
Feb 13 15:38:47.644624 systemd-networkd[750]: Enumeration completed
Feb 13 15:38:47.627903 ignition[670]: no config URL provided
Feb 13 15:38:47.645191 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:38:47.627912 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:38:47.645199 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:38:47.627925 ignition[670]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:38:47.646865 systemd-networkd[750]: eth0: Link UP
Feb 13 15:38:47.627937 ignition[670]: failed to fetch config: resource requires networking
Feb 13 15:38:47.646874 systemd-networkd[750]: eth0: Gained carrier
Feb 13 15:38:47.628221 ignition[670]: Ignition finished successfully
Feb 13 15:38:47.646888 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:38:47.741020 ignition[760]: Ignition 2.20.0
Feb 13 15:38:47.660432 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.113/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 15:38:47.741031 ignition[760]: Stage: fetch
Feb 13 15:38:47.662805 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:38:47.741233 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:38:47.672872 systemd[1]: Reached target network.target - Network.
Feb 13 15:38:47.741245 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:38:47.695662 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:38:47.741408 ignition[760]: parsed url from cmdline: ""
Feb 13 15:38:47.753096 unknown[760]: fetched base config from "system"
Feb 13 15:38:47.741415 ignition[760]: no config URL provided
Feb 13 15:38:47.753107 unknown[760]: fetched base config from "system"
Feb 13 15:38:47.741423 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:38:47.753113 unknown[760]: fetched user config from "gcp"
Feb 13 15:38:47.741434 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:38:47.755574 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:38:47.741465 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Feb 13 15:38:47.777579 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:38:47.745147 ignition[760]: GET result: OK
Feb 13 15:38:47.829018 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:38:47.745240 ignition[760]: parsing config with SHA512: dcd8b6a38b060009b8b2df29874270d1962050720b9e118ad58a05cd8aa3b4c16dcb0f2f07a7d6984fc8bcb89d1c375948c39143904f00eb598f0591aa54ac35
Feb 13 15:38:47.848575 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:38:47.753694 ignition[760]: fetch: fetch complete
Feb 13 15:38:47.903836 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:38:47.753704 ignition[760]: fetch: fetch passed
Feb 13 15:38:47.914691 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:38:47.753761 ignition[760]: Ignition finished successfully
Feb 13 15:38:47.934604 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:38:47.826172 ignition[766]: Ignition 2.20.0
Feb 13 15:38:47.952513 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:38:47.826182 ignition[766]: Stage: kargs
Feb 13 15:38:47.969547 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:38:47.826455 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:38:47.986532 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:38:47.826472 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:38:48.007749 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:38:47.827742 ignition[766]: kargs: kargs passed
Feb 13 15:38:47.827799 ignition[766]: Ignition finished successfully
Feb 13 15:38:47.901197 ignition[773]: Ignition 2.20.0
Feb 13 15:38:47.901207 ignition[773]: Stage: disks
Feb 13 15:38:47.901503 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:38:47.901528 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:38:47.902749 ignition[773]: disks: disks passed
Feb 13 15:38:47.902810 ignition[773]: Ignition finished successfully
Feb 13 15:38:48.062438 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 15:38:48.231428 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:38:48.264495 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:38:48.382359 kernel: EXT4-fs (sda9): mounted filesystem 8023eced-1511-4e72-a58a-db1b8cb3210e r/w with ordered data mode. Quota mode: none.
Feb 13 15:38:48.383254 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:38:48.392163 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:38:48.416472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:38:48.437911 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:38:48.447949 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:38:48.500743 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (789)
Feb 13 15:38:48.500801 kernel: BTRFS info (device sda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:38:48.500828 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:38:48.500853 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:38:48.448021 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:38:48.542691 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:38:48.542744 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:38:48.448058 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:38:48.513196 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:38:48.552251 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:38:48.573557 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:38:48.697489 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:38:48.708472 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:38:48.718485 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:38:48.728505 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:38:48.884393 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:38:48.891178 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:38:48.916745 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:38:48.939546 kernel: BTRFS info (device sda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:38:48.949520 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:38:48.984508 ignition[905]: INFO : Ignition 2.20.0
Feb 13 15:38:48.984508 ignition[905]: INFO : Stage: mount
Feb 13 15:38:49.005490 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:38:49.005490 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:38:49.005490 ignition[905]: INFO : mount: mount passed
Feb 13 15:38:49.005490 ignition[905]: INFO : Ignition finished successfully
Feb 13 15:38:48.989411 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:38:48.997455 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:38:49.005962 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:38:49.374619 systemd-networkd[750]: eth0: Gained IPv6LL
Feb 13 15:38:49.389705 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:38:49.436418 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (918)
Feb 13 15:38:49.454372 kernel: BTRFS info (device sda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:38:49.454515 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:38:49.454543 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:38:49.477798 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:38:49.477927 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:38:49.481506 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:38:49.521582 ignition[935]: INFO : Ignition 2.20.0
Feb 13 15:38:49.521582 ignition[935]: INFO : Stage: files
Feb 13 15:38:49.536540 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:38:49.536540 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:38:49.536540 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:38:49.536540 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:38:49.536540 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:38:49.536540 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:38:49.536540 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:38:49.536540 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:38:49.536540 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:38:49.536540 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 15:38:49.530225 unknown[935]: wrote ssh authorized keys file for user: core
Feb 13 15:38:49.677370 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:38:50.055931 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:38:50.055931 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:38:50.088537 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 15:38:50.358686 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:38:50.495533 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:38:50.511495 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 15:38:50.745541 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:38:51.065876 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:38:51.065876 ignition[935]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:38:51.104518 ignition[935]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:38:51.104518 ignition[935]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:38:51.104518 ignition[935]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:38:51.104518 ignition[935]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:38:51.104518 ignition[935]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:38:51.104518 ignition[935]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:38:51.104518 ignition[935]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:38:51.104518 ignition[935]: INFO : files: files passed
Feb 13 15:38:51.104518 ignition[935]: INFO : Ignition finished successfully
Feb 13 15:38:51.071075 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:38:51.090583 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:38:51.111664 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:38:51.197347 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:38:51.317563 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:38:51.317563 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:38:51.197520 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:38:51.355586 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:38:51.238111 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:38:51.257987 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:38:51.281607 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:38:51.391868 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:38:51.392038 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:38:51.414624 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:38:51.433733 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:38:51.461869 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:38:51.468673 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:38:51.528718 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:38:51.536638 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:38:51.579481 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:38:51.579961 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:38:51.610916 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:38:51.620994 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:38:51.621267 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:38:51.665708 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:38:51.666195 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:38:51.691909 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:38:51.709903 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:38:51.739778 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:38:51.740246 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:38:51.768905 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:38:51.789925 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:38:51.810897 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:38:51.821008 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:38:51.845906 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:38:51.846302 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:38:51.870955 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 15:38:51.880978 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:38:51.909795 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:38:51.909983 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:38:51.931918 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:38:51.932147 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:38:51.963954 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:38:51.964263 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:38:51.983966 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:38:51.984194 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:38:52.010856 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:38:52.015153 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:38:52.098656 ignition[987]: INFO : Ignition 2.20.0 Feb 13 15:38:52.098656 ignition[987]: INFO : Stage: umount Feb 13 15:38:52.098656 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:52.098656 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 15:38:52.098656 ignition[987]: INFO : umount: umount passed Feb 13 15:38:52.098656 ignition[987]: INFO : Ignition finished successfully Feb 13 15:38:52.048539 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:38:52.048923 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:38:52.050732 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:38:52.050952 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:38:52.102453 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:38:52.103894 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:38:52.104043 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:38:52.117306 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:38:52.117514 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:38:52.147679 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:38:52.147921 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:38:52.171846 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:38:52.171943 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:38:52.189768 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:38:52.189865 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:38:52.209765 systemd[1]: Stopped target network.target - Network. Feb 13 15:38:52.228634 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:38:52.228767 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:38:52.238760 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:38:52.264600 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:38:52.269443 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:38:52.273699 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:38:52.291718 systemd[1]: Stopped target sockets.target - Socket Units. 
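The umount stage above closes out the Ignition stages seen in this log (fetch, kargs, disks, mount, files, umount). After boot, assuming the initrd journal was flushed to persistent storage, the same records can be pulled back out alongside the result file written during the files stage:

    journalctl -t ignition            # all Ignition entries, grouped by stage
    cat /etc/.ignition-result.json    # summary written at the end of the files stage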
Feb 13 15:38:52.321691 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:38:52.321766 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:38:52.346719 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:38:52.346783 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:38:52.364663 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:38:52.364764 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:38:52.372762 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:38:52.372848 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:38:52.406693 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:38:52.406782 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:38:52.415032 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:38:52.441711 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:38:52.462465 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:38:52.462609 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:38:52.475123 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 15:38:52.475539 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:38:52.475672 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:38:52.500361 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 15:38:52.500718 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:38:52.500830 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:38:52.524052 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:38:52.524160 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:38:52.537465 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:38:52.557466 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:38:52.557610 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:38:52.568595 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:38:52.568691 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:38:52.586762 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:38:52.586835 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:38:52.604574 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:38:52.604669 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:38:53.075509 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 13 15:38:52.623738 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:38:52.644104 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 15:38:52.644206 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:38:52.644713 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:38:52.644886 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 15:38:52.660698 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:38:52.660770 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:38:52.689571 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:38:52.689643 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:38:52.709572 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:38:52.709691 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:38:52.739488 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:38:52.739614 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:38:52.769497 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:38:52.769642 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:38:52.805533 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:38:52.830482 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:38:52.830618 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:38:52.849767 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:38:52.849839 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:38:52.870595 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:38:52.870689 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:38:52.892583 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:38:52.892693 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:38:52.912896 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 15:38:52.912981 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:38:52.913609 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:38:52.913729 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:38:52.933113 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:38:52.933262 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:38:52.953827 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:38:52.977596 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:38:53.020590 systemd[1]: Switching root. 
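"Switching root" hands PID 1 over to the real root filesystem assembled under /sysroot; the initrd journal daemon is stopped and restarts inside the new root. The pivot is the initrd-internal equivalent of:

    # sketch: what the "Switching root." step performs
    systemctl switch-root /sysroot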
Feb 13 15:38:53.417551 systemd-journald[184]: Journal stopped Feb 13 15:38:56.105101 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:38:56.105167 kernel: SELinux: policy capability open_perms=1 Feb 13 15:38:56.105191 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:38:56.105210 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:38:56.105228 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:38:56.105248 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:38:56.105269 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:38:56.105289 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:38:56.105311 kernel: audit: type=1403 audit(1739461133.771:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:38:56.105361 systemd[1]: Successfully loaded SELinux policy in 95.831ms. Feb 13 15:38:56.105393 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.183ms. Feb 13 15:38:56.105415 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:38:56.105434 systemd[1]: Detected virtualization google. Feb 13 15:38:56.105453 systemd[1]: Detected architecture x86-64. Feb 13 15:38:56.105481 systemd[1]: Detected first boot. Feb 13 15:38:56.105504 systemd[1]: Initializing machine ID from random generator. Feb 13 15:38:56.105523 zram_generator::config[1031]: No configuration found. Feb 13 15:38:56.105543 kernel: Guest personality initialized and is inactive Feb 13 15:38:56.105562 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 13 15:38:56.105586 kernel: Initialized host personality Feb 13 15:38:56.105606 kernel: NET: Registered PF_VSOCK protocol family Feb 13 15:38:56.105626 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:38:56.105649 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 15:38:56.105671 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:38:56.105692 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:38:56.105712 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:38:56.105734 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:38:56.105757 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:38:56.105785 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:38:56.105807 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:38:56.105830 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:38:56.105852 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:38:56.105873 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:38:56.105893 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:38:56.105914 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
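First boot on the final root: the SELinux policy loads, virtualization is detected, and because /etc/machine-id is still empty the ID is initialized randomly. These can be confirmed later with standard tools:

    systemd-detect-virt       # should report the hypervisor family detected above
    cat /etc/machine-id       # the ID initialized from the random generator
    journalctl --list-boots   # enumerates recorded boots for this machine ID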
Feb 13 15:38:56.105940 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:38:56.105971 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:38:56.105996 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:38:56.106019 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:38:56.106041 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:38:56.106069 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:38:56.106091 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:38:56.106113 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:38:56.106139 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:38:56.106160 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:38:56.106182 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:38:56.106203 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:38:56.106225 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:38:56.106245 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:38:56.106268 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:38:56.106291 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:38:56.107146 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:38:56.107190 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 15:38:56.107217 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:38:56.107242 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:38:56.107275 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:38:56.107298 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:38:56.107355 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:38:56.107381 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:38:56.107405 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:38:56.107428 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:38:56.107452 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:38:56.107476 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:38:56.107503 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:38:56.107525 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:38:56.107544 systemd[1]: Reached target machines.target - Containers. Feb 13 15:38:56.107563 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:38:56.107585 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 15:38:56.107608 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:38:56.107630 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:38:56.107651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:38:56.107672 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:38:56.107698 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:38:56.107730 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:38:56.107753 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:38:56.107774 kernel: ACPI: bus type drm_connector registered Feb 13 15:38:56.107795 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:38:56.107816 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:38:56.107836 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:38:56.107865 kernel: fuse: init (API version 7.39) Feb 13 15:38:56.107888 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:38:56.107912 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:38:56.107937 kernel: loop: module loaded Feb 13 15:38:56.107962 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:38:56.107988 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:38:56.108014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:38:56.108113 systemd-journald[1119]: Collecting audit messages is disabled. Feb 13 15:38:56.108167 systemd-journald[1119]: Journal started Feb 13 15:38:56.108211 systemd-journald[1119]: Runtime Journal (/run/log/journal/1cff1dcfcfc24582ab0c46d201b490c9) is 8M, max 148.6M, 140.6M free. Feb 13 15:38:54.858082 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:38:54.871149 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:38:54.871804 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:38:56.125371 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:38:56.152401 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:38:56.168359 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 15:38:56.198279 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:38:56.213355 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:38:56.220367 systemd[1]: Stopped verity-setup.service. Feb 13 15:38:56.256185 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:38:56.256304 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:38:56.269163 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:38:56.279931 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
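Several of the units being started here are instances of the modprobe@.service template: the instance name after the "@" is expanded (%i) into the kernel module to load, so starting the unit is roughly equivalent to calling modprobe directly:

    systemctl start modprobe@fuse.service
    # roughly the same effect as:
    modprobe fuse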
Feb 13 15:38:56.290757 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:38:56.300759 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:38:56.311812 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:38:56.321747 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:38:56.331940 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:38:56.344010 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:38:56.355910 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:38:56.356286 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:38:56.367956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:38:56.368249 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:38:56.379898 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:38:56.380182 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:38:56.390923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:38:56.391210 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:38:56.402927 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:38:56.403231 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:38:56.413916 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:38:56.414194 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:38:56.424956 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:38:56.435970 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:38:56.447989 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:38:56.460087 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 15:38:56.471939 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:38:56.496771 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:38:56.512474 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:38:56.535533 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:38:56.545537 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:38:56.545613 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:38:56.558497 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 15:38:56.581632 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:38:56.604602 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:38:56.615635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:38:56.626678 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:38:56.648675 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Feb 13 15:38:56.660534 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:38:56.672554 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:38:56.682521 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:38:56.690637 systemd-journald[1119]: Time spent on flushing to /var/log/journal/1cff1dcfcfc24582ab0c46d201b490c9 is 61.094ms for 951 entries. Feb 13 15:38:56.690637 systemd-journald[1119]: System Journal (/var/log/journal/1cff1dcfcfc24582ab0c46d201b490c9) is 8M, max 584.8M, 576.8M free. Feb 13 15:38:56.783307 systemd-journald[1119]: Received client request to flush runtime journal. Feb 13 15:38:56.701666 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:38:56.717800 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:38:56.738579 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:38:56.756601 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:38:56.774423 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:38:56.786905 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:38:56.801401 kernel: loop0: detected capacity change from 0 to 147912 Feb 13 15:38:56.806449 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:38:56.818000 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:38:56.829988 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:38:56.842026 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:38:56.851473 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Feb 13 15:38:56.851506 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Feb 13 15:38:56.866113 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:38:56.886564 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 15:38:56.896353 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:38:56.904750 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:38:56.929551 kernel: loop1: detected capacity change from 0 to 138176 Feb 13 15:38:56.947764 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:38:56.950862 udevadm[1158]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:38:56.971867 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:38:56.979029 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 15:38:57.049019 kernel: loop2: detected capacity change from 0 to 210664 Feb 13 15:38:57.103050 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:38:57.123675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:38:57.169616 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. 
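The journal begins in volatile storage under /run and is flushed to /var/log/journal once the root is writable; the size caps quoted above (8M used, ~148.6M runtime and ~584.8M system limits) derive from journald's defaults, which scale with the size of the backing filesystem. For reference:

    journalctl --disk-usage   # total space the journal currently occupies
    journalctl --flush        # ask journald to move /run/log/journal to /var/log/journal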
Feb 13 15:38:57.169654 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Feb 13 15:38:57.186964 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:38:57.197393 kernel: loop3: detected capacity change from 0 to 52152 Feb 13 15:38:57.275540 kernel: loop4: detected capacity change from 0 to 147912 Feb 13 15:38:57.337351 kernel: loop5: detected capacity change from 0 to 138176 Feb 13 15:38:57.397359 kernel: loop6: detected capacity change from 0 to 210664 Feb 13 15:38:57.456373 kernel: loop7: detected capacity change from 0 to 52152 Feb 13 15:38:57.492252 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Feb 13 15:38:57.493337 (sd-merge)[1182]: Merged extensions into '/usr'. Feb 13 15:38:57.500353 systemd[1]: Reload requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:38:57.500553 systemd[1]: Reloading... Feb 13 15:38:57.644644 zram_generator::config[1206]: No configuration found. Feb 13 15:38:57.900194 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:38:57.915276 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:38:58.063948 systemd[1]: Reloading finished in 562 ms. Feb 13 15:38:58.083273 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:38:58.094094 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:38:58.120574 systemd[1]: Starting ensure-sysext.service... Feb 13 15:38:58.136999 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:38:58.171885 systemd[1]: Reload requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:38:58.171915 systemd[1]: Reloading... Feb 13 15:38:58.189164 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:38:58.189699 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:38:58.193936 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:38:58.194561 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Feb 13 15:38:58.194681 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Feb 13 15:38:58.205838 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:38:58.205859 systemd-tmpfiles[1251]: Skipping /boot Feb 13 15:38:58.232923 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:38:58.232946 systemd-tmpfiles[1251]: Skipping /boot Feb 13 15:38:58.301348 zram_generator::config[1280]: No configuration found. Feb 13 15:38:58.448626 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:38:58.542221 systemd[1]: Reloading finished in 369 ms. Feb 13 15:38:58.556837 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
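The (sd-merge) lines show systemd-sysext assembling the /usr overlay: the extension images it found, including the kubernetes.raw link Ignition wrote earlier, are merged into /usr and /opt, after which systemd reloads units. On a running system the merge can be inspected or redone with:

    systemd-sysext status    # which extensions are merged, and where
    systemd-sysext refresh   # unmerge and re-merge after images change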
Feb 13 15:38:58.589355 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:38:58.613834 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:38:58.630017 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:38:58.649754 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:38:58.672484 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:38:58.692218 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:38:58.711135 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:38:58.729888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:38:58.730277 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:38:58.739468 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:38:58.761811 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:38:58.779343 augenrules[1349]: No rules Feb 13 15:38:58.783545 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:38:58.789930 systemd-udevd[1340]: Using default interface naming scheme 'v255'. Feb 13 15:38:58.793617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:38:58.793906 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:38:58.800788 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:38:58.811532 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:38:58.816758 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:38:58.818116 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:38:58.827785 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:38:58.841273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:38:58.841619 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:38:58.851225 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:38:58.853015 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:38:58.865559 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:38:58.865877 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:38:58.875991 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:38:58.888722 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:38:58.900503 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:38:58.955653 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:38:59.003745 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
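systemd-tmpfiles-setup above applies the tmpfiles.d configuration (the "Duplicate line" warnings a bit earlier come from overlapping entries across config files and are harmless). The same pass can be re-run or previewed by hand:

    systemd-tmpfiles --create            # apply tmpfiles.d entries now
    systemd-tmpfiles --cat-config | less # show the merged configuration being applied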
Feb 13 15:38:59.006809 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:38:59.016639 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:38:59.025744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:38:59.037650 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:38:59.055540 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:38:59.072583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:38:59.093570 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:38:59.116577 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:38:59.125625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:38:59.125940 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:38:59.138509 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:38:59.148528 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:38:59.164576 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:38:59.174498 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:38:59.174785 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:38:59.178518 systemd[1]: Finished ensure-sysext.service. Feb 13 15:38:59.186967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:38:59.187556 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:38:59.199139 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:38:59.201409 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:38:59.202562 augenrules[1389]: /sbin/augenrules: No change Feb 13 15:38:59.213813 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:38:59.214809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:38:59.226989 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:38:59.228787 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:38:59.253210 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 15:38:59.260753 augenrules[1423]: No rules Feb 13 15:38:59.262759 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:38:59.277271 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1367) Feb 13 15:38:59.283141 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:38:59.283608 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:38:59.305146 systemd-resolved[1335]: Positive Trust Anchors: Feb 13 15:38:59.305167 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:38:59.305232 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:38:59.312132 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Feb 13 15:38:59.333615 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Feb 13 15:38:59.341742 systemd-resolved[1335]: Defaulting to hostname 'linux'. Feb 13 15:38:59.351640 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Feb 13 15:38:59.362509 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:38:59.362623 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:38:59.373063 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:38:59.431359 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 15:38:59.431576 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Feb 13 15:38:59.507256 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:38:59.507296 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Feb 13 15:38:59.429200 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:38:59.510848 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 15:38:59.524355 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:38:59.534708 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Feb 13 15:38:59.539366 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 15:38:59.550488 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 15:38:59.563667 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:38:59.569466 systemd-networkd[1402]: lo: Link UP Feb 13 15:38:59.569963 systemd-networkd[1402]: lo: Gained carrier Feb 13 15:38:59.575305 systemd-networkd[1402]: Enumeration completed Feb 13 15:38:59.575504 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:38:59.576627 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:38:59.577180 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:38:59.578846 systemd-networkd[1402]: eth0: Link UP Feb 13 15:38:59.578857 systemd-networkd[1402]: eth0: Gained carrier Feb 13 15:38:59.578890 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:38:59.587123 systemd[1]: Reached target network.target - Network.
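The "Positive Trust Anchors" record above is the published DNSSEC trust anchor for the root zone (key tag 20326), which systemd-resolved ships built in; the negative anchors exempt private-use names such as home.arpa and the RFC 1918 reverse zones from DNSSEC validation. Once resolution is up, the live view is available with:

    resolvectl status               # per-link DNS servers and DNSSEC setting
    resolvectl query flatcar.org    # resolve a name through systemd-resolved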
Feb 13 15:38:59.588421 systemd-networkd[1402]: eth0: DHCPv4 address 10.128.0.113/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 15:38:59.606551 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:38:59.627361 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:38:59.630659 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:38:59.650835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:38:59.662360 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:38:59.687408 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:38:59.702696 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:38:59.711690 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:38:59.733900 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:38:59.764915 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:38:59.766041 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:38:59.771722 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:38:59.786791 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:38:59.819957 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:38:59.832014 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:38:59.844618 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:38:59.854714 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:38:59.865622 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:38:59.877785 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:38:59.887752 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:38:59.899553 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:38:59.910526 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:38:59.910601 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:38:59.919539 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:38:59.931876 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:38:59.943474 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:38:59.955382 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:38:59.966823 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:38:59.978570 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:39:00.000399 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:39:00.011196 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
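Note the /32 host address in the lease above: GCE hands out single-address netmasks, so the gateway 10.128.0.1 is reached via an on-link route rather than a shared subnet. The lease and the resulting routes can be checked with:

    networkctl status eth0    # lease, DNS, and carrier state as networkd sees it
    ip route show             # expect a default via 10.128.0.1 using on-link routing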
Feb 13 15:39:00.023586 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:39:00.033699 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:39:00.043541 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:39:00.052576 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:39:00.052651 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:39:00.064534 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:39:00.079598 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:39:00.101766 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:39:00.139507 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:39:00.157557 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:39:00.169050 jq[1474]: false Feb 13 15:39:00.168489 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:39:00.176030 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:39:00.192406 coreos-metadata[1472]: Feb 13 15:39:00.192 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Feb 13 15:39:00.196965 coreos-metadata[1472]: Feb 13 15:39:00.194 INFO Fetch successful Feb 13 15:39:00.196965 coreos-metadata[1472]: Feb 13 15:39:00.194 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Feb 13 15:39:00.196418 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:39:00.200171 coreos-metadata[1472]: Feb 13 15:39:00.199 INFO Fetch successful Feb 13 15:39:00.200171 coreos-metadata[1472]: Feb 13 15:39:00.200 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Feb 13 15:39:00.202666 coreos-metadata[1472]: Feb 13 15:39:00.201 INFO Fetch successful Feb 13 15:39:00.203496 coreos-metadata[1472]: Feb 13 15:39:00.203 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Feb 13 15:39:00.212473 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
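coreos-metadata is querying the GCE metadata service at 169.254.169.254, which answers only when the request carries the Metadata-Flavor header. The same fetch can be reproduced by hand:

    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/hostname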
Feb 13 15:39:00.213586 coreos-metadata[1472]: Feb 13 15:39:00.205 INFO Fetch successful Feb 13 15:39:00.215968 extend-filesystems[1477]: Found loop4 Feb 13 15:39:00.232848 extend-filesystems[1477]: Found loop5 Feb 13 15:39:00.232848 extend-filesystems[1477]: Found loop6 Feb 13 15:39:00.232848 extend-filesystems[1477]: Found loop7 Feb 13 15:39:00.232848 extend-filesystems[1477]: Found sda Feb 13 15:39:00.232848 extend-filesystems[1477]: Found sda1 Feb 13 15:39:00.232848 extend-filesystems[1477]: Found sda2 Feb 13 15:39:00.232848 extend-filesystems[1477]: Found sda3 Feb 13 15:39:00.232848 extend-filesystems[1477]: Found usr Feb 13 15:39:00.232848 extend-filesystems[1477]: Found sda4 Feb 13 15:39:00.232848 extend-filesystems[1477]: Found sda6 Feb 13 15:39:00.232848 extend-filesystems[1477]: Found sda7 Feb 13 15:39:00.232848 extend-filesystems[1477]: Found sda9 Feb 13 15:39:00.232848 extend-filesystems[1477]: Checking size of /dev/sda9 Feb 13 15:39:00.388736 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Feb 13 15:39:00.388795 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Feb 13 15:39:00.388860 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1370) Feb 13 15:39:00.231015 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:23:52 UTC 2025 (1): Starting Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: ---------------------------------------------------- Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: corporation. Support and training for ntp-4 are
Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: available at https://www.nwtime.org/support Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: ---------------------------------------------------- Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: proto: precision = 0.088 usec (-23) Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: basedate set to 2025-02-01 Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: gps base set to 2025-02-02 (week 2352) Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: Listen normally on 3 eth0 10.128.0.113:123 Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: Listen normally on 4 lo [::1]:123 Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: bind(21) AF_INET6 fe80::4001:aff:fe80:71%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:71%2#123 Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: failed to init interface for address fe80::4001:aff:fe80:71%2 Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: Listening on routing socket on fd #21 for interface updates Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:39:00.389189 ntpd[1480]: 13 Feb 15:39:00 ntpd[1480]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:39:00.392924 extend-filesystems[1477]: Resized partition /dev/sda9 Feb 13 15:39:00.248900 dbus-daemon[1473]: [system] SELinux support is enabled Feb 13 15:39:00.251584 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:39:00.428499 extend-filesystems[1495]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:39:00.428499 extend-filesystems[1495]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 15:39:00.428499 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 2 Feb 13 15:39:00.428499 extend-filesystems[1495]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Feb 13 15:39:00.255892 dbus-daemon[1473]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1402 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:39:00.300761 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:39:00.480292 extend-filesystems[1477]: Resized filesystem in /dev/sda9 Feb 13 15:39:00.331263 ntpd[1480]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:23:52 UTC 2025 (1): Starting Feb 13 15:39:00.343086 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Feb 13 15:39:00.331301 ntpd[1480]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:39:00.346075 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
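extend-filesystems has grown the root filesystem in place, from 1617920 to 2538491 blocks of 4 KiB, i.e. from roughly 6.2 GiB to about 9.7 GiB, filling the enlarged sda9 partition. The online grow step amounts to the following (a sketch; resize2fs fills the containing partition when no explicit size is given, and ext4 supports growing while mounted):

    resize2fs /dev/sda9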
Feb 13 15:39:00.331385 ntpd[1480]: ---------------------------------------------------- Feb 13 15:39:00.516548 update_engine[1501]: I20250213 15:39:00.476794 1501 main.cc:92] Flatcar Update Engine starting Feb 13 15:39:00.516548 update_engine[1501]: I20250213 15:39:00.483590 1501 update_check_scheduler.cc:74] Next update check in 7m33s Feb 13 15:39:00.353736 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:39:00.331527 ntpd[1480]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:39:00.517203 jq[1502]: true Feb 13 15:39:00.381543 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:39:00.331547 ntpd[1480]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:39:00.408737 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:39:00.331562 ntpd[1480]: corporation. Support and training for ntp-4 are Feb 13 15:39:00.440028 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:39:00.331576 ntpd[1480]: available at https://www.nwtime.org/support Feb 13 15:39:00.441512 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:39:00.331590 ntpd[1480]: ---------------------------------------------------- Feb 13 15:39:00.442057 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:39:00.335530 ntpd[1480]: proto: precision = 0.088 usec (-23) Feb 13 15:39:00.443625 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:39:00.335969 ntpd[1480]: basedate set to 2025-02-01 Feb 13 15:39:00.492433 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:39:00.335993 ntpd[1480]: gps base set to 2025-02-02 (week 2352) Feb 13 15:39:00.492955 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:39:00.340092 ntpd[1480]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:39:00.512482 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:39:00.340160 ntpd[1480]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:39:00.513434 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:39:00.344533 ntpd[1480]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:39:00.344594 ntpd[1480]: Listen normally on 3 eth0 10.128.0.113:123 Feb 13 15:39:00.344654 ntpd[1480]: Listen normally on 4 lo [::1]:123 Feb 13 15:39:00.344727 ntpd[1480]: bind(21) AF_INET6 fe80::4001:aff:fe80:71%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:39:00.344763 ntpd[1480]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:71%2#123 Feb 13 15:39:00.344785 ntpd[1480]: failed to init interface for address fe80::4001:aff:fe80:71%2 Feb 13 15:39:00.344836 ntpd[1480]: Listening on routing socket on fd #21 for interface updates Feb 13 15:39:00.362662 ntpd[1480]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:39:00.362709 ntpd[1480]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:39:00.557503 systemd-logind[1498]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:39:00.557546 systemd-logind[1498]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 15:39:00.557578 systemd-logind[1498]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:39:00.566904 systemd-logind[1498]: New seat seat0. 
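The repeated ntpd "bind ... Cannot assign requested address" messages above are a startup race rather than an NTP failure: ntpd comes up before eth0's fe80:: link-local address has finished duplicate address detection (eth0 only logs "Gained IPv6LL" at 15:39:01), so it watches the routing socket and binds the address on a later pass (the "Listen normally on 7" line at 15:39:04). A small sketch for checking both halves by hand, assuming eth0 as in this log:

# A "tentative" flag on fe80::... means DAD has not finished and
# bind() will fail exactly as in the log above.
ip -6 addr show dev eth0

# Which UDP sockets ntpd currently holds on the NTP port.
ss -lnup 'sport = :123'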
Feb 13 15:39:00.576187 (ntainerd)[1510]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:39:00.579816 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:39:00.593082 jq[1509]: true Feb 13 15:39:00.607229 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:39:00.630114 dbus-daemon[1473]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 15:39:00.703151 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:39:00.707352 tar[1508]: linux-amd64/helm Feb 13 15:39:00.720021 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:39:00.730795 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:39:00.731076 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:39:00.731304 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:39:00.756775 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 15:39:00.766537 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:39:00.766823 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:39:00.789906 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:39:00.798348 bash[1540]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:39:00.813112 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:39:00.838822 systemd[1]: Starting sshkeys.service... Feb 13 15:39:00.926198 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:39:00.949866 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Feb 13 15:39:01.157350 coreos-metadata[1549]: Feb 13 15:39:01.155 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Feb 13 15:39:01.166242 coreos-metadata[1549]: Feb 13 15:39:01.164 INFO Fetch failed with 404: resource not found Feb 13 15:39:01.166242 coreos-metadata[1549]: Feb 13 15:39:01.165 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Feb 13 15:39:01.173753 coreos-metadata[1549]: Feb 13 15:39:01.173 INFO Fetch successful Feb 13 15:39:01.173753 coreos-metadata[1549]: Feb 13 15:39:01.173 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Feb 13 15:39:01.177930 coreos-metadata[1549]: Feb 13 15:39:01.177 INFO Fetch failed with 404: resource not found Feb 13 15:39:01.177930 coreos-metadata[1549]: Feb 13 15:39:01.177 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Feb 13 15:39:01.188200 coreos-metadata[1549]: Feb 13 15:39:01.188 INFO Fetch failed with 404: resource not found Feb 13 15:39:01.188200 coreos-metadata[1549]: Feb 13 15:39:01.188 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Feb 13 15:39:01.196060 coreos-metadata[1549]: Feb 13 15:39:01.195 INFO Fetch successful Feb 13 15:39:01.198175 unknown[1549]: wrote ssh authorized keys file for user: core Feb 13 15:39:01.288141 update-ssh-keys[1553]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:39:01.286438 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:39:01.307403 systemd[1]: Finished sshkeys.service. Feb 13 15:39:01.314252 dbus-daemon[1473]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:39:01.316844 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 15:39:01.322176 dbus-daemon[1473]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1541 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:39:01.332094 ntpd[1480]: bind(24) AF_INET6 fe80::4001:aff:fe80:71%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:39:01.333209 ntpd[1480]: 13 Feb 15:39:01 ntpd[1480]: bind(24) AF_INET6 fe80::4001:aff:fe80:71%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:39:01.333209 ntpd[1480]: 13 Feb 15:39:01 ntpd[1480]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:71%2#123 Feb 13 15:39:01.333209 ntpd[1480]: 13 Feb 15:39:01 ntpd[1480]: failed to init interface for address fe80::4001:aff:fe80:71%2 Feb 13 15:39:01.332141 ntpd[1480]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:71%2#123 Feb 13 15:39:01.332164 ntpd[1480]: failed to init interface for address fe80::4001:aff:fe80:71%2 Feb 13 15:39:01.348736 systemd[1]: Starting polkit.service - Authorization Manager... 
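The coreos-metadata agent above walks a fixed fallback chain of GCE metadata attributes, treating each 404 as "attribute absent" and merging whatever it finds. The same probe can be reproduced with curl; the Metadata-Flavor header is mandatory on GCE, otherwise the server rejects the request:

MD=http://169.254.169.254/computeMetadata/v1
for attr in instance/attributes/sshKeys \
            instance/attributes/ssh-keys \
            instance/attributes/block-project-ssh-keys \
            project/attributes/sshKeys \
            project/attributes/ssh-keys; do
  echo "== $attr"
  # -f makes curl exit non-zero on 404, mirroring "Fetch failed" above.
  curl -sf -H 'Metadata-Flavor: Google' "$MD/$attr" && echo || echo '(absent)'
done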
Feb 13 15:39:01.440763 polkitd[1558]: Started polkitd version 121 Feb 13 15:39:01.453175 containerd[1510]: time="2025-02-13T15:39:01.453048843Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:39:01.470096 polkitd[1558]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 15:39:01.470217 polkitd[1558]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 15:39:01.473520 locksmithd[1542]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:39:01.475200 polkitd[1558]: Finished loading, compiling and executing 2 rules Feb 13 15:39:01.476633 dbus-daemon[1473]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 15:39:01.476902 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 15:39:01.480292 polkitd[1558]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 15:39:01.528785 systemd-hostnamed[1541]: Hostname set to (transient) Feb 13 15:39:01.530816 systemd-resolved[1335]: System hostname changed to 'ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal'. Feb 13 15:39:01.543403 sshd_keygen[1505]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:39:01.592006 containerd[1510]: time="2025-02-13T15:39:01.591934297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:01.598570 systemd-networkd[1402]: eth0: Gained IPv6LL Feb 13 15:39:01.600942 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:39:01.602305 containerd[1510]: time="2025-02-13T15:39:01.601770517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:01.602305 containerd[1510]: time="2025-02-13T15:39:01.601840565Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:39:01.602305 containerd[1510]: time="2025-02-13T15:39:01.601890050Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:39:01.602305 containerd[1510]: time="2025-02-13T15:39:01.602214865Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:39:01.602305 containerd[1510]: time="2025-02-13T15:39:01.602250269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:01.602601 containerd[1510]: time="2025-02-13T15:39:01.602433678Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:01.602601 containerd[1510]: time="2025-02-13T15:39:01.602461597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:01.603409 containerd[1510]: time="2025-02-13T15:39:01.603038069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:01.603409 containerd[1510]: time="2025-02-13T15:39:01.603068678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:01.603409 containerd[1510]: time="2025-02-13T15:39:01.603110183Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:01.603409 containerd[1510]: time="2025-02-13T15:39:01.603129503Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:01.603612 containerd[1510]: time="2025-02-13T15:39:01.603360749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:01.604618 containerd[1510]: time="2025-02-13T15:39:01.604576943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:39:01.605569 containerd[1510]: time="2025-02-13T15:39:01.605531557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:39:01.605653 containerd[1510]: time="2025-02-13T15:39:01.605569360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:39:01.606149 containerd[1510]: time="2025-02-13T15:39:01.605775180Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:39:01.606149 containerd[1510]: time="2025-02-13T15:39:01.605869537Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:39:01.612570 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:39:01.613638 containerd[1510]: time="2025-02-13T15:39:01.613556119Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:39:01.614358 containerd[1510]: time="2025-02-13T15:39:01.613782757Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:39:01.614358 containerd[1510]: time="2025-02-13T15:39:01.613872406Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:39:01.614358 containerd[1510]: time="2025-02-13T15:39:01.613902675Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:39:01.614358 containerd[1510]: time="2025-02-13T15:39:01.613926782Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:39:01.614358 containerd[1510]: time="2025-02-13T15:39:01.614252075Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615305005Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615686297Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615716301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615746678Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615770451Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615794368Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615817781Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615841170Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615866105Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615886791Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615906568Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615929162Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615958175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.616501 containerd[1510]: time="2025-02-13T15:39:01.615980499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616001582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616024003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616044334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616066672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616086364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616107626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616130904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616154564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616174290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616195884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616238901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616263763Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616298661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616500592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617257 containerd[1510]: time="2025-02-13T15:39:01.616529226Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:39:01.617929 containerd[1510]: time="2025-02-13T15:39:01.616619348Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:39:01.617929 containerd[1510]: time="2025-02-13T15:39:01.616652823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:39:01.617929 containerd[1510]: time="2025-02-13T15:39:01.616748725Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:39:01.617929 containerd[1510]: time="2025-02-13T15:39:01.616772808Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:39:01.617929 containerd[1510]: time="2025-02-13T15:39:01.616790686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:39:01.617929 containerd[1510]: time="2025-02-13T15:39:01.616825072Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:39:01.617929 containerd[1510]: time="2025-02-13T15:39:01.616845032Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:39:01.617929 containerd[1510]: time="2025-02-13T15:39:01.616863200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:39:01.618575 containerd[1510]: time="2025-02-13T15:39:01.617392184Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:39:01.618575 containerd[1510]: time="2025-02-13T15:39:01.617474337Z" level=info msg="Connect containerd service" Feb 13 15:39:01.618575 containerd[1510]: time="2025-02-13T15:39:01.617541046Z" level=info msg="using legacy CRI server" Feb 13 15:39:01.618575 containerd[1510]: time="2025-02-13T15:39:01.617555533Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:39:01.618575 containerd[1510]: time="2025-02-13T15:39:01.617741944Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:39:01.619083 containerd[1510]: time="2025-02-13T15:39:01.618774016Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:39:01.619466 
containerd[1510]: time="2025-02-13T15:39:01.619402168Z" level=info msg="Start subscribing containerd event" Feb 13 15:39:01.619562 containerd[1510]: time="2025-02-13T15:39:01.619489753Z" level=info msg="Start recovering state" Feb 13 15:39:01.619612 containerd[1510]: time="2025-02-13T15:39:01.619580565Z" level=info msg="Start event monitor" Feb 13 15:39:01.619783 containerd[1510]: time="2025-02-13T15:39:01.619613767Z" level=info msg="Start snapshots syncer" Feb 13 15:39:01.619783 containerd[1510]: time="2025-02-13T15:39:01.619630832Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:39:01.619783 containerd[1510]: time="2025-02-13T15:39:01.619643256Z" level=info msg="Start streaming server" Feb 13 15:39:01.621591 containerd[1510]: time="2025-02-13T15:39:01.620599964Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:39:01.621591 containerd[1510]: time="2025-02-13T15:39:01.620684237Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:39:01.621591 containerd[1510]: time="2025-02-13T15:39:01.621551432Z" level=info msg="containerd successfully booted in 0.171961s" Feb 13 15:39:01.624007 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:39:01.636700 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:39:01.659478 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:39:01.676659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:01.697554 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:39:01.713583 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Feb 13 15:39:01.731478 systemd[1]: Started sshd@0-10.128.0.113:22-139.178.68.195:49370.service - OpenSSH per-connection server daemon (139.178.68.195:49370). Feb 13 15:39:01.743113 init.sh[1588]: + '[' -e /etc/default/instance_configs.cfg.template ']' Feb 13 15:39:01.743675 init.sh[1588]: + echo -e '[InstanceSetup]\nset_host_keys = false' Feb 13 15:39:01.747358 init.sh[1588]: + /usr/bin/google_instance_setup Feb 13 15:39:01.746774 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:39:01.747402 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:39:01.770857 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:39:01.793838 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:39:01.863500 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:39:01.885832 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:39:01.903157 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:39:01.914830 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:39:02.073743 tar[1508]: linux-amd64/LICENSE Feb 13 15:39:02.074531 tar[1508]: linux-amd64/README.md Feb 13 15:39:02.097963 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:39:02.174098 sshd[1590]: Accepted publickey for core from 139.178.68.195 port 49370 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:02.175416 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:02.189858 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:39:02.210804 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Feb 13 15:39:02.242204 systemd-logind[1498]: New session 1 of user core. Feb 13 15:39:02.268544 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:39:02.294396 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:39:02.326263 (systemd)[1610]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:39:02.332300 systemd-logind[1498]: New session c1 of user core. Feb 13 15:39:02.541385 instance-setup[1593]: INFO Running google_set_multiqueue. Feb 13 15:39:02.573636 instance-setup[1593]: INFO Set channels for eth0 to 2. Feb 13 15:39:02.578825 instance-setup[1593]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1. Feb 13 15:39:02.581020 instance-setup[1593]: INFO /proc/irq/27/smp_affinity_list: real affinity 0 Feb 13 15:39:02.581583 instance-setup[1593]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1. Feb 13 15:39:02.583738 instance-setup[1593]: INFO /proc/irq/28/smp_affinity_list: real affinity 0 Feb 13 15:39:02.584589 instance-setup[1593]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1. Feb 13 15:39:02.587211 instance-setup[1593]: INFO /proc/irq/29/smp_affinity_list: real affinity 1 Feb 13 15:39:02.587260 instance-setup[1593]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1. Feb 13 15:39:02.588999 instance-setup[1593]: INFO /proc/irq/30/smp_affinity_list: real affinity 1 Feb 13 15:39:02.603512 instance-setup[1593]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 15:39:02.609133 instance-setup[1593]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 15:39:02.611337 instance-setup[1593]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Feb 13 15:39:02.611577 instance-setup[1593]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Feb 13 15:39:02.658362 init.sh[1588]: + /usr/bin/google_metadata_script_runner --script-type startup Feb 13 15:39:02.674936 systemd[1610]: Queued start job for default target default.target. Feb 13 15:39:02.682875 systemd[1610]: Created slice app.slice - User Application Slice. Feb 13 15:39:02.682933 systemd[1610]: Reached target paths.target - Paths. Feb 13 15:39:02.683017 systemd[1610]: Reached target timers.target - Timers. Feb 13 15:39:02.685634 systemd[1610]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:39:02.716026 systemd[1610]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:39:02.717366 systemd[1610]: Reached target sockets.target - Sockets. Feb 13 15:39:02.717470 systemd[1610]: Reached target basic.target - Basic System. Feb 13 15:39:02.717547 systemd[1610]: Reached target default.target - Main User Target. Feb 13 15:39:02.717602 systemd[1610]: Startup finished in 367ms. Feb 13 15:39:02.717891 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:39:02.735056 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:39:02.852422 startup-script[1646]: INFO Starting startup scripts. Feb 13 15:39:02.859050 startup-script[1646]: INFO No startup scripts found in metadata. Feb 13 15:39:02.859129 startup-script[1646]: INFO Finished running startup scripts. 
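google_set_multiqueue above pins each virtio-net queue interrupt to a single vCPU and sets per-queue transmit steering (XPS) masks; the "write error: Value too large for defined data type" lines appear to be the script offering a value wider than one sysfs node accepts, and are harmless here (it continues and sets the XPS masks it reports next). The mechanism is plain procfs/sysfs writes, roughly:

# Pin IRQs 27/28 (queue 0) to CPU 0 and 29/30 (queue 1) to CPU 1,
# matching the instance-setup lines above. Requires root.
echo 0 > /proc/irq/27/smp_affinity_list
echo 0 > /proc/irq/28/smp_affinity_list
echo 1 > /proc/irq/29/smp_affinity_list
echo 1 > /proc/irq/30/smp_affinity_list

# XPS: steer tx-0 to CPU 0 (bitmask 1) and tx-1 to CPU 1 (bitmask 2).
echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus
echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus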
Feb 13 15:39:02.883555 init.sh[1588]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Feb 13 15:39:02.884065 init.sh[1588]: + daemon_pids=() Feb 13 15:39:02.884065 init.sh[1588]: + for d in accounts clock_skew network Feb 13 15:39:02.884065 init.sh[1588]: + daemon_pids+=($!) Feb 13 15:39:02.884065 init.sh[1588]: + for d in accounts clock_skew network Feb 13 15:39:02.884230 init.sh[1588]: + daemon_pids+=($!) Feb 13 15:39:02.886351 init.sh[1588]: + for d in accounts clock_skew network Feb 13 15:39:02.886351 init.sh[1588]: + daemon_pids+=($!) Feb 13 15:39:02.886351 init.sh[1588]: + NOTIFY_SOCKET=/run/systemd/notify Feb 13 15:39:02.886351 init.sh[1588]: + /usr/bin/systemd-notify --ready Feb 13 15:39:02.886602 init.sh[1653]: + /usr/bin/google_accounts_daemon Feb 13 15:39:02.889407 init.sh[1654]: + /usr/bin/google_clock_skew_daemon Feb 13 15:39:02.889814 init.sh[1655]: + /usr/bin/google_network_daemon Feb 13 15:39:02.916530 systemd[1]: Started oem-gce.service - GCE Linux Agent. Feb 13 15:39:02.948107 init.sh[1588]: + wait -n 1653 1654 1655 Feb 13 15:39:02.984861 systemd[1]: Started sshd@1-10.128.0.113:22-139.178.68.195:49380.service - OpenSSH per-connection server daemon (139.178.68.195:49380). Feb 13 15:39:03.318299 google-clock-skew[1654]: INFO Starting Google Clock Skew daemon. Feb 13 15:39:03.333809 google-clock-skew[1654]: INFO Clock drift token has changed: 0. Feb 13 15:39:03.350641 sshd[1658]: Accepted publickey for core from 139.178.68.195 port 49380 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:03.353385 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:03.367471 systemd-logind[1498]: New session 2 of user core. Feb 13 15:39:03.370632 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:39:03.397057 google-networking[1655]: INFO Starting Google Networking daemon. Feb 13 15:39:03.442623 groupadd[1669]: group added to /etc/group: name=google-sudoers, GID=1000 Feb 13 15:39:03.447615 groupadd[1669]: group added to /etc/gshadow: name=google-sudoers Feb 13 15:39:03.498911 groupadd[1669]: new group: name=google-sudoers, GID=1000 Feb 13 15:39:03.535696 google-accounts[1653]: INFO Starting Google Accounts daemon. Feb 13 15:39:03.549025 google-accounts[1653]: WARNING OS Login not installed. Feb 13 15:39:03.551497 google-accounts[1653]: INFO Creating a new user account for 0. Feb 13 15:39:03.558077 init.sh[1678]: useradd: invalid user name '0': use --badname to ignore Feb 13 15:39:03.558350 google-accounts[1653]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Feb 13 15:39:03.576438 sshd[1667]: Connection closed by 139.178.68.195 port 49380 Feb 13 15:39:03.577608 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:03.581994 systemd[1]: sshd@1-10.128.0.113:22-139.178.68.195:49380.service: Deactivated successfully. Feb 13 15:39:03.585164 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:39:03.587654 systemd-logind[1498]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:39:03.589302 systemd-logind[1498]: Removed session 2. Feb 13 15:39:04.000439 systemd-resolved[1335]: Clock change detected. Flushing caches. Feb 13 15:39:04.001161 google-clock-skew[1654]: INFO Synced system time with hardware clock. 
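The failed account creation above is the accounts daemon taking a metadata entry named "0" literally: useradd exits with status 3 (invalid argument) because it rejects purely numeric login names, which would be ambiguous with UIDs for tools like chown(1). As the error text hints, the check can be overridden; a quick demonstration, assuming root and a useradd recent enough to know --badname:

useradd -m -s /bin/bash -p '*' 0   # fails: invalid user name '0'
echo $?                            # 3, the exit status logged above

# shadow-utils with --badname support will accept it anyway; usually
# unwise, for exactly the chown/UID ambiguity reason.
useradd --badname -m -s /bin/bash -p '*' 0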
Feb 13 15:39:04.044462 systemd[1]: Started sshd@2-10.128.0.113:22-139.178.68.195:49386.service - OpenSSH per-connection server daemon (139.178.68.195:49386). Feb 13 15:39:04.225764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:04.239699 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:39:04.246211 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:39:04.250373 systemd[1]: Startup finished in 1.057s (kernel) + 9.983s (initrd) + 10.157s (userspace) = 21.198s. Feb 13 15:39:04.360890 sshd[1684]: Accepted publickey for core from 139.178.68.195 port 49386 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:04.363337 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:04.371615 systemd-logind[1498]: New session 3 of user core. Feb 13 15:39:04.379829 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:39:04.576712 sshd[1696]: Connection closed by 139.178.68.195 port 49386 Feb 13 15:39:04.577553 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:04.585036 systemd[1]: sshd@2-10.128.0.113:22-139.178.68.195:49386.service: Deactivated successfully. Feb 13 15:39:04.588044 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:39:04.589814 systemd-logind[1498]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:39:04.591699 systemd-logind[1498]: Removed session 3. Feb 13 15:39:04.737341 ntpd[1480]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:71%2]:123 Feb 13 15:39:04.737780 ntpd[1480]: 13 Feb 15:39:04 ntpd[1480]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:71%2]:123 Feb 13 15:39:05.274055 kubelet[1691]: E0213 15:39:05.273985 1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:39:05.276137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:39:05.276405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:39:05.276917 systemd[1]: kubelet.service: Consumed 1.238s CPU time, 246.6M memory peak. Feb 13 15:39:14.641085 systemd[1]: Started sshd@3-10.128.0.113:22-139.178.68.195:39126.service - OpenSSH per-connection server daemon (139.178.68.195:39126). Feb 13 15:39:14.940963 sshd[1709]: Accepted publickey for core from 139.178.68.195 port 39126 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:14.943062 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:14.949814 systemd-logind[1498]: New session 4 of user core. Feb 13 15:39:14.960863 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:39:15.156685 sshd[1711]: Connection closed by 139.178.68.195 port 39126 Feb 13 15:39:15.157747 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:15.163416 systemd[1]: sshd@3-10.128.0.113:22-139.178.68.195:39126.service: Deactivated successfully. Feb 13 15:39:15.165892 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:39:15.167034 systemd-logind[1498]: Session 4 logged out. Waiting for processes to exit. 
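The kubelet crash loop that begins here is the normal pre-bootstrap state for a node image: kubelet.service is enabled, but /var/lib/kubelet/config.yaml does not exist until a bootstrapper such as kubeadm writes it, so every start exits 1 and systemd keeps scheduling restarts (the "restart counter" lines below). A quick check of which state a node is in:

# kubeadm init/join creates this file; until then the loop is expected.
test -f /var/lib/kubelet/config.yaml && echo bootstrapped || echo 'not bootstrapped yet'

# How systemd sees the loop: last exit status and the restart policy.
systemctl status kubelet --no-pager -l
systemctl cat kubelet | grep -i restart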
Feb 13 15:39:15.168487 systemd-logind[1498]: Removed session 4. Feb 13 15:39:15.213943 systemd[1]: Started sshd@4-10.128.0.113:22-139.178.68.195:39130.service - OpenSSH per-connection server daemon (139.178.68.195:39130). Feb 13 15:39:15.454306 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:39:15.463838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:15.517995 sshd[1717]: Accepted publickey for core from 139.178.68.195 port 39130 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:15.516781 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:15.523751 systemd-logind[1498]: New session 5 of user core. Feb 13 15:39:15.534863 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:39:15.722555 sshd[1722]: Connection closed by 139.178.68.195 port 39130 Feb 13 15:39:15.723823 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:15.736352 systemd[1]: sshd@4-10.128.0.113:22-139.178.68.195:39130.service: Deactivated successfully. Feb 13 15:39:15.741935 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:39:15.747762 systemd-logind[1498]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:39:15.758616 systemd-logind[1498]: Removed session 5. Feb 13 15:39:15.763756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:15.774467 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:39:15.790658 systemd[1]: Started sshd@5-10.128.0.113:22-139.178.68.195:39138.service - OpenSSH per-connection server daemon (139.178.68.195:39138). Feb 13 15:39:15.850530 kubelet[1731]: E0213 15:39:15.850454 1731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:39:15.855420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:39:15.855705 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:39:15.856245 systemd[1]: kubelet.service: Consumed 196ms CPU time, 97.6M memory peak. Feb 13 15:39:16.094260 sshd[1738]: Accepted publickey for core from 139.178.68.195 port 39138 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:16.096030 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:16.103418 systemd-logind[1498]: New session 6 of user core. Feb 13 15:39:16.110800 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:39:16.310279 sshd[1743]: Connection closed by 139.178.68.195 port 39138 Feb 13 15:39:16.311189 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:16.316492 systemd[1]: sshd@5-10.128.0.113:22-139.178.68.195:39138.service: Deactivated successfully. Feb 13 15:39:16.319061 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:39:16.320120 systemd-logind[1498]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:39:16.321561 systemd-logind[1498]: Removed session 6. 
Feb 13 15:39:16.370997 systemd[1]: Started sshd@6-10.128.0.113:22-139.178.68.195:39150.service - OpenSSH per-connection server daemon (139.178.68.195:39150). Feb 13 15:39:16.671332 sshd[1749]: Accepted publickey for core from 139.178.68.195 port 39150 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:16.672986 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:16.679115 systemd-logind[1498]: New session 7 of user core. Feb 13 15:39:16.686745 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:39:16.868358 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:39:16.868928 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:39:16.884828 sudo[1752]: pam_unix(sudo:session): session closed for user root Feb 13 15:39:16.927932 sshd[1751]: Connection closed by 139.178.68.195 port 39150 Feb 13 15:39:16.929408 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:16.935299 systemd[1]: sshd@6-10.128.0.113:22-139.178.68.195:39150.service: Deactivated successfully. Feb 13 15:39:16.937819 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:39:16.938959 systemd-logind[1498]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:39:16.940558 systemd-logind[1498]: Removed session 7. Feb 13 15:39:16.985958 systemd[1]: Started sshd@7-10.128.0.113:22-139.178.68.195:44214.service - OpenSSH per-connection server daemon (139.178.68.195:44214). Feb 13 15:39:17.273436 sshd[1758]: Accepted publickey for core from 139.178.68.195 port 44214 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:17.275247 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:17.282906 systemd-logind[1498]: New session 8 of user core. Feb 13 15:39:17.292932 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:39:17.455114 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:39:17.455687 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:39:17.462286 sudo[1762]: pam_unix(sudo:session): session closed for user root Feb 13 15:39:17.479750 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:39:17.480310 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:39:17.504446 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:39:17.547282 augenrules[1784]: No rules Feb 13 15:39:17.549384 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:39:17.549849 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:39:17.552201 sudo[1761]: pam_unix(sudo:session): session closed for user root Feb 13 15:39:17.594688 sshd[1760]: Connection closed by 139.178.68.195 port 44214 Feb 13 15:39:17.595785 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:17.601529 systemd[1]: sshd@7-10.128.0.113:22-139.178.68.195:44214.service: Deactivated successfully. Feb 13 15:39:17.604306 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:39:17.606831 systemd-logind[1498]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:39:17.608570 systemd-logind[1498]: Removed session 8. 
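The sudo sequence above removes the shipped audit rule fragments and restarts audit-rules.service, after which augenrules reports "No rules". augenrules simply merges /etc/audit/rules.d/*.rules into /etc/audit/audit.rules and hands the result to auditctl, so the by-hand equivalent of what just happened is:

# Delete the packaged fragments (the sudo rm above). Requires root.
rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules

# Rebuild and load the merged set; with the directory now empty this
# produces the "No rules" message seen in the log.
augenrules --load
auditctl -l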
Feb 13 15:39:17.652987 systemd[1]: Started sshd@8-10.128.0.113:22-139.178.68.195:44218.service - OpenSSH per-connection server daemon (139.178.68.195:44218). Feb 13 15:39:17.954807 sshd[1793]: Accepted publickey for core from 139.178.68.195 port 44218 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:39:17.957063 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:17.963833 systemd-logind[1498]: New session 9 of user core. Feb 13 15:39:17.974895 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:39:18.137153 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:39:18.137950 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:39:18.628969 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:39:18.629211 (dockerd)[1813]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:39:19.084830 dockerd[1813]: time="2025-02-13T15:39:19.084740615Z" level=info msg="Starting up" Feb 13 15:39:19.252657 dockerd[1813]: time="2025-02-13T15:39:19.252025567Z" level=info msg="Loading containers: start." Feb 13 15:39:19.475581 kernel: Initializing XFRM netlink socket Feb 13 15:39:19.599713 systemd-networkd[1402]: docker0: Link UP Feb 13 15:39:19.632300 dockerd[1813]: time="2025-02-13T15:39:19.632234417Z" level=info msg="Loading containers: done." Feb 13 15:39:19.659144 dockerd[1813]: time="2025-02-13T15:39:19.659053005Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:39:19.659414 dockerd[1813]: time="2025-02-13T15:39:19.659194562Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:39:19.659414 dockerd[1813]: time="2025-02-13T15:39:19.659379668Z" level=info msg="Daemon has completed initialization" Feb 13 15:39:19.704351 dockerd[1813]: time="2025-02-13T15:39:19.704271361Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:39:19.707587 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:39:20.704879 containerd[1510]: time="2025-02-13T15:39:20.704778570Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:39:21.208905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858809598.mount: Deactivated successfully. 
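dockerd's overlay2 warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in this kernel, the daemon disables the native overlay diff path and falls back to a slower differ when building images, but the storage driver itself still works normally. Verifying the effective driver after startup:

docker info --format '{{.Driver}}'         # expect: overlay2
docker info | grep -iA2 'storage driver'   # driver plus backing filesystem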
Feb 13 15:39:22.938710 containerd[1510]: time="2025-02-13T15:39:22.938636538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:22.940275 containerd[1510]: time="2025-02-13T15:39:22.940212707Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32684842" Feb 13 15:39:22.941639 containerd[1510]: time="2025-02-13T15:39:22.941562138Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:22.945358 containerd[1510]: time="2025-02-13T15:39:22.945293344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:22.947304 containerd[1510]: time="2025-02-13T15:39:22.947242293Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 2.242421402s" Feb 13 15:39:22.947304 containerd[1510]: time="2025-02-13T15:39:22.947289261Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 15:39:22.976480 containerd[1510]: time="2025-02-13T15:39:22.976424192Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:39:24.663652 containerd[1510]: time="2025-02-13T15:39:24.663582330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:24.667534 containerd[1510]: time="2025-02-13T15:39:24.666633414Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29613479" Feb 13 15:39:24.671459 containerd[1510]: time="2025-02-13T15:39:24.671412275Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:24.676960 containerd[1510]: time="2025-02-13T15:39:24.676913173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:24.680375 containerd[1510]: time="2025-02-13T15:39:24.680316827Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 1.703836052s" Feb 13 15:39:24.680631 containerd[1510]: time="2025-02-13T15:39:24.680376056Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 
15:39:24.713022 containerd[1510]: time="2025-02-13T15:39:24.712964815Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 15:39:25.906409 containerd[1510]: time="2025-02-13T15:39:25.906339999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:25.907997 containerd[1510]: time="2025-02-13T15:39:25.907922091Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17784046" Feb 13 15:39:25.909425 containerd[1510]: time="2025-02-13T15:39:25.909379710Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:25.913388 containerd[1510]: time="2025-02-13T15:39:25.913317159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:25.915271 containerd[1510]: time="2025-02-13T15:39:25.914808791Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.201784769s" Feb 13 15:39:25.915271 containerd[1510]: time="2025-02-13T15:39:25.914855153Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 15:39:25.946546 containerd[1510]: time="2025-02-13T15:39:25.946478530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:39:26.024740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:39:26.038154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:26.322329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:26.338150 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:39:26.402193 kubelet[2089]: E0213 15:39:26.402102 2089 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:39:26.404207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:39:26.404452 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:39:26.404993 systemd[1]: kubelet.service: Consumed 180ms CPU time, 98.5M memory peak. Feb 13 15:39:27.271096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2759684341.mount: Deactivated successfully. 
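Each "Scheduled restart job, restart counter is at N" line is systemd re-running kubelet under the unit's Restart= policy; the counter is queryable directly, which helps distinguish this benign pre-bootstrap loop from a real post-bootstrap crash loop:

systemctl show kubelet -p NRestarts -p Restart -p RestartUSec
journalctl -u kubelet -n 20 --no-pager   # the last attempts' error lines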
Feb 13 15:39:27.856844 containerd[1510]: time="2025-02-13T15:39:27.856769735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:27.858245 containerd[1510]: time="2025-02-13T15:39:27.858163536Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29059753" Feb 13 15:39:27.859960 containerd[1510]: time="2025-02-13T15:39:27.859872795Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:27.863070 containerd[1510]: time="2025-02-13T15:39:27.862986700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:27.864259 containerd[1510]: time="2025-02-13T15:39:27.864073658Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 1.917531033s" Feb 13 15:39:27.864259 containerd[1510]: time="2025-02-13T15:39:27.864117742Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 15:39:27.894413 containerd[1510]: time="2025-02-13T15:39:27.894353518Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:39:28.271646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2679917763.mount: Deactivated successfully. 
Feb 13 15:39:29.354380 containerd[1510]: time="2025-02-13T15:39:29.354297739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:29.356222 containerd[1510]: time="2025-02-13T15:39:29.356146337Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Feb 13 15:39:29.358197 containerd[1510]: time="2025-02-13T15:39:29.358122932Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:29.368545 containerd[1510]: time="2025-02-13T15:39:29.367932727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:29.371620 containerd[1510]: time="2025-02-13T15:39:29.371548818Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.477144803s" Feb 13 15:39:29.371620 containerd[1510]: time="2025-02-13T15:39:29.371606759Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:39:29.405188 containerd[1510]: time="2025-02-13T15:39:29.404456351Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:39:29.784676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218063344.mount: Deactivated successfully. 
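
[Editor's note] Each of these pulls is containerd acting on the kubelet's CRI requests; the same pull can be reproduced directly against the socket. A hedged sketch, assuming the containerd v1 Go client (github.com/containerd/containerd) and the "k8s.io" namespace that CRI-managed images live in:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/coredns/coredns:v1.11.1",
		containerd.WithPullUnpack) // unpack layers into the snapshotter, as CRI pulls do
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name(), "digest:", img.Target().Digest)
}
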
Feb 13 15:39:29.792519 containerd[1510]: time="2025-02-13T15:39:29.792438240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:29.793681 containerd[1510]: time="2025-02-13T15:39:29.793555180Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Feb 13 15:39:29.795361 containerd[1510]: time="2025-02-13T15:39:29.795291083Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:29.798577 containerd[1510]: time="2025-02-13T15:39:29.798490815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:29.801248 containerd[1510]: time="2025-02-13T15:39:29.799636983Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 394.434248ms" Feb 13 15:39:29.801248 containerd[1510]: time="2025-02-13T15:39:29.799682794Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:39:29.831795 containerd[1510]: time="2025-02-13T15:39:29.831733534Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:39:30.213013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1735662191.mount: Deactivated successfully. Feb 13 15:39:31.967160 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
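
[Editor's note] registry.k8s.io/pause is the sandbox image: one pause process per pod holds the pod's namespaces open while the real containers come and go. The actual binary is a few lines of C; a rough Go equivalent of its behavior, assuming it runs as PID 1 of the pod (Linux-only sketch, not the shipped implementation):

package main

import (
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM, syscall.SIGCHLD)
	for sig := range sigs {
		switch sig {
		case syscall.SIGCHLD:
			// reap any zombies re-parented to us as the pod's init
			for {
				pid, err := syscall.Wait4(-1, nil, syscall.WNOHANG, nil)
				if pid <= 0 || err != nil {
					break
				}
			}
		default:
			return // exit on INT/TERM so the sandbox can be torn down
		}
	}
}
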
Feb 13 15:39:32.544914 containerd[1510]: time="2025-02-13T15:39:32.544830869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:32.546884 containerd[1510]: time="2025-02-13T15:39:32.546813244Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" Feb 13 15:39:32.548238 containerd[1510]: time="2025-02-13T15:39:32.548157601Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:32.552190 containerd[1510]: time="2025-02-13T15:39:32.552117119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:32.553994 containerd[1510]: time="2025-02-13T15:39:32.553817771Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.722035978s" Feb 13 15:39:32.553994 containerd[1510]: time="2025-02-13T15:39:32.553866828Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 15:39:35.708402 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:35.708771 systemd[1]: kubelet.service: Consumed 180ms CPU time, 98.5M memory peak. Feb 13 15:39:35.715902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:35.751746 systemd[1]: Reload requested from client PID 2280 ('systemctl') (unit session-9.scope)... Feb 13 15:39:35.751770 systemd[1]: Reloading... Feb 13 15:39:35.936575 zram_generator::config[2325]: No configuration found. Feb 13 15:39:36.084554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:39:36.229229 systemd[1]: Reloading finished in 476 ms. Feb 13 15:39:36.306751 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:36.311544 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:39:36.311898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:36.311988 systemd[1]: kubelet.service: Consumed 136ms CPU time, 83.3M memory peak. Feb 13 15:39:36.318983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:36.850718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:36.866255 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:39:36.954407 kubelet[2378]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:39:36.954407 kubelet[2378]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Feb 13 15:39:36.954407 kubelet[2378]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:39:36.954407 kubelet[2378]: I0213 15:39:36.953654 2378 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:39:37.335552 kubelet[2378]: I0213 15:39:37.334802 2378 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:39:37.335552 kubelet[2378]: I0213 15:39:37.334853 2378 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:39:37.335552 kubelet[2378]: I0213 15:39:37.335437 2378 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:39:37.375537 kubelet[2378]: I0213 15:39:37.375286 2378 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:39:37.376488 kubelet[2378]: E0213 15:39:37.376451 2378 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:37.397979 kubelet[2378]: I0213 15:39:37.397919 2378 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:39:37.399575 kubelet[2378]: I0213 15:39:37.399463 2378 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:39:37.399916 kubelet[2378]: I0213 15:39:37.399566 2378 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:39:37.400151 kubelet[2378]: I0213 15:39:37.399937 2378 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:39:37.400151 kubelet[2378]: I0213 15:39:37.399959 2378 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:39:37.400256 kubelet[2378]: I0213 15:39:37.400202 2378 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:37.401714 kubelet[2378]: I0213 15:39:37.401680 2378 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:39:37.401714 kubelet[2378]: I0213 15:39:37.401720 2378 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:39:37.401922 kubelet[2378]: I0213 15:39:37.401763 2378 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:39:37.401922 kubelet[2378]: I0213 15:39:37.401796 2378 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:39:37.409221 kubelet[2378]: W0213 15:39:37.408756 2378 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:37.409221 kubelet[2378]: E0213 15:39:37.408855 2378 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:37.409221 kubelet[2378]: W0213 15:39:37.408960 2378 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:37.409221 kubelet[2378]: E0213 15:39:37.409007 2378 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:37.409933 kubelet[2378]: I0213 15:39:37.409813 2378 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:39:37.412736 kubelet[2378]: I0213 15:39:37.411816 2378 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:39:37.412736 kubelet[2378]: W0213 15:39:37.411942 2378 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:39:37.413579 kubelet[2378]: I0213 15:39:37.413338 2378 server.go:1264] "Started kubelet" Feb 13 15:39:37.420078 kubelet[2378]: I0213 15:39:37.420023 2378 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:39:37.421709 kubelet[2378]: I0213 15:39:37.421675 2378 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:39:37.424252 kubelet[2378]: I0213 15:39:37.423654 2378 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:39:37.424252 kubelet[2378]: I0213 15:39:37.424089 2378 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:39:37.425427 kubelet[2378]: E0213 15:39:37.425240 2378 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.113:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.113:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal.1823ceb98c4d763c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,UID:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 15:39:37.41330182 +0000 UTC m=+0.538569982,LastTimestamp:2025-02-13 15:39:37.41330182 +0000 UTC m=+0.538569982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,}" Feb 13 15:39:37.425650 kubelet[2378]: I0213 15:39:37.425452 2378 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:39:37.432537 kubelet[2378]: E0213 15:39:37.431978 2378 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" not found" Feb 13 15:39:37.432845 kubelet[2378]: I0213 15:39:37.432822 2378 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:39:37.433145 kubelet[2378]: I0213 15:39:37.433124 2378 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:39:37.433382 kubelet[2378]: I0213 15:39:37.433363 2378 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:39:37.435019 kubelet[2378]: W0213 15:39:37.434940 2378 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.128.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:37.435209 kubelet[2378]: E0213 15:39:37.435188 2378 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:37.435310 kubelet[2378]: E0213 15:39:37.435250 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.113:6443: connect: connection refused" interval="200ms" Feb 13 15:39:37.437360 kubelet[2378]: E0213 15:39:37.437209 2378 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:39:37.439527 kubelet[2378]: I0213 15:39:37.437712 2378 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:39:37.439527 kubelet[2378]: I0213 15:39:37.437862 2378 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:39:37.441736 kubelet[2378]: I0213 15:39:37.441713 2378 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:39:37.472081 kubelet[2378]: I0213 15:39:37.471998 2378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:39:37.474543 kubelet[2378]: I0213 15:39:37.474471 2378 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:39:37.474788 kubelet[2378]: I0213 15:39:37.474772 2378 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:39:37.474914 kubelet[2378]: I0213 15:39:37.474901 2378 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:39:37.475119 kubelet[2378]: E0213 15:39:37.475090 2378 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:39:37.479179 kubelet[2378]: I0213 15:39:37.479140 2378 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:39:37.479463 kubelet[2378]: I0213 15:39:37.479440 2378 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:39:37.479648 kubelet[2378]: I0213 15:39:37.479633 2378 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:37.483264 kubelet[2378]: W0213 15:39:37.483151 2378 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:37.483264 kubelet[2378]: E0213 15:39:37.483263 2378 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:37.488900 kubelet[2378]: I0213 15:39:37.488665 2378 policy_none.go:49] "None policy: Start" Feb 13 15:39:37.490838 kubelet[2378]: I0213 15:39:37.490794 2378 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:39:37.490838 kubelet[2378]: I0213 15:39:37.490835 2378 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:39:37.507960 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:39:37.520231 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:39:37.526030 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:39:37.535563 kubelet[2378]: I0213 15:39:37.535193 2378 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:39:37.535563 kubelet[2378]: I0213 15:39:37.535592 2378 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:39:37.535563 kubelet[2378]: I0213 15:39:37.535809 2378 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:39:37.537075 kubelet[2378]: E0213 15:39:37.536912 2378 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.113:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.113:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal.1823ceb98c4d763c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,UID:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 15:39:37.41330182 +0000 UTC m=+0.538569982,LastTimestamp:2025-02-13 15:39:37.41330182 +0000 UTC m=+0.538569982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,}" Feb 13 15:39:37.540488 kubelet[2378]: I0213 15:39:37.540001 2378 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.541068 kubelet[2378]: E0213 15:39:37.540978 2378 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.113:6443/api/v1/nodes\": dial tcp 10.128.0.113:6443: connect: connection refused" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.542916 kubelet[2378]: E0213 15:39:37.542695 2378 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" not found" Feb 13 15:39:37.576369 kubelet[2378]: I0213 15:39:37.576270 2378 topology_manager.go:215] "Topology Admit Handler" podUID="9f071d854af1480628984c2a270b6c62" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.585416 kubelet[2378]: I0213 15:39:37.585354 2378 topology_manager.go:215] "Topology Admit Handler" podUID="ff79047f67dd3f0cf6e7fb0717940f83" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.594882 kubelet[2378]: I0213 15:39:37.593937 2378 topology_manager.go:215] "Topology Admit Handler" podUID="be741f7662e4bdb3aab19d6013fd4666" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.604538 systemd[1]: Created slice kubepods-burstable-pod9f071d854af1480628984c2a270b6c62.slice - libcontainer container kubepods-burstable-pod9f071d854af1480628984c2a270b6c62.slice. 
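
[Editor's note] The slice names systemd creates here are derived mechanically: the QoS parent (kubepods-burstable.slice) plus "pod" plus the pod UID, per the CgroupDriver:"systemd" setting in the nodeConfig above. Dashes in a UID would be swapped for underscores, though these static-pod UIDs (manifest hashes) contain none. A small illustration of the naming rule, not kubelet's exact code:

package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the unit names visible in the log, e.g.
// kubepods-burstable-pod9f071d854af1480628984c2a270b6c62.slice
func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_") // "-" is a path separator in systemd slice names
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	fmt.Println(podSliceName("burstable", "9f071d854af1480628984c2a270b6c62"))
}
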
Feb 13 15:39:37.619940 systemd[1]: Created slice kubepods-burstable-podff79047f67dd3f0cf6e7fb0717940f83.slice - libcontainer container kubepods-burstable-podff79047f67dd3f0cf6e7fb0717940f83.slice. Feb 13 15:39:37.633620 systemd[1]: Created slice kubepods-burstable-podbe741f7662e4bdb3aab19d6013fd4666.slice - libcontainer container kubepods-burstable-podbe741f7662e4bdb3aab19d6013fd4666.slice. Feb 13 15:39:37.636106 kubelet[2378]: E0213 15:39:37.636054 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.113:6443: connect: connection refused" interval="400ms" Feb 13 15:39:37.734967 kubelet[2378]: I0213 15:39:37.734809 2378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff79047f67dd3f0cf6e7fb0717940f83-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"ff79047f67dd3f0cf6e7fb0717940f83\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.734967 kubelet[2378]: I0213 15:39:37.734908 2378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f071d854af1480628984c2a270b6c62-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"9f071d854af1480628984c2a270b6c62\") " pod="kube-system/kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.734967 kubelet[2378]: I0213 15:39:37.734962 2378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f071d854af1480628984c2a270b6c62-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"9f071d854af1480628984c2a270b6c62\") " pod="kube-system/kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.735374 kubelet[2378]: I0213 15:39:37.735014 2378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be741f7662e4bdb3aab19d6013fd4666-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"be741f7662e4bdb3aab19d6013fd4666\") " pod="kube-system/kube-scheduler-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.735374 kubelet[2378]: I0213 15:39:37.735059 2378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f071d854af1480628984c2a270b6c62-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"9f071d854af1480628984c2a270b6c62\") " pod="kube-system/kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.735374 kubelet[2378]: I0213 15:39:37.735094 2378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff79047f67dd3f0cf6e7fb0717940f83-ca-certs\") pod 
\"kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"ff79047f67dd3f0cf6e7fb0717940f83\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.735374 kubelet[2378]: I0213 15:39:37.735123 2378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ff79047f67dd3f0cf6e7fb0717940f83-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"ff79047f67dd3f0cf6e7fb0717940f83\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.735616 kubelet[2378]: I0213 15:39:37.735161 2378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff79047f67dd3f0cf6e7fb0717940f83-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"ff79047f67dd3f0cf6e7fb0717940f83\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.735616 kubelet[2378]: I0213 15:39:37.735194 2378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff79047f67dd3f0cf6e7fb0717940f83-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"ff79047f67dd3f0cf6e7fb0717940f83\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.749473 kubelet[2378]: I0213 15:39:37.749353 2378 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.750083 kubelet[2378]: E0213 15:39:37.750023 2378 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.113:6443/api/v1/nodes\": dial tcp 10.128.0.113:6443: connect: connection refused" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:37.917461 containerd[1510]: time="2025-02-13T15:39:37.917285901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,Uid:9f071d854af1480628984c2a270b6c62,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:37.932767 containerd[1510]: time="2025-02-13T15:39:37.932678795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,Uid:ff79047f67dd3f0cf6e7fb0717940f83,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:37.939905 containerd[1510]: time="2025-02-13T15:39:37.939843473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,Uid:be741f7662e4bdb3aab19d6013fd4666,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:38.037782 kubelet[2378]: E0213 15:39:38.037691 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.113:6443: connect: connection refused" interval="800ms" Feb 13 15:39:38.156036 kubelet[2378]: I0213 
15:39:38.155988 2378 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:38.156662 kubelet[2378]: E0213 15:39:38.156607 2378 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.113:6443/api/v1/nodes\": dial tcp 10.128.0.113:6443: connect: connection refused" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:38.294044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3016326278.mount: Deactivated successfully. Feb 13 15:39:38.306397 containerd[1510]: time="2025-02-13T15:39:38.306296064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:38.308940 containerd[1510]: time="2025-02-13T15:39:38.308875408Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:38.311153 containerd[1510]: time="2025-02-13T15:39:38.311079335Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Feb 13 15:39:38.312272 containerd[1510]: time="2025-02-13T15:39:38.312204139Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:39:38.315335 containerd[1510]: time="2025-02-13T15:39:38.315263908Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:38.317769 containerd[1510]: time="2025-02-13T15:39:38.317489979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:39:38.317769 containerd[1510]: time="2025-02-13T15:39:38.317663965Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:38.320891 containerd[1510]: time="2025-02-13T15:39:38.320842776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:39:38.323977 containerd[1510]: time="2025-02-13T15:39:38.323114901Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 405.664029ms" Feb 13 15:39:38.325818 containerd[1510]: time="2025-02-13T15:39:38.325480236Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 392.636962ms" Feb 13 15:39:38.341806 containerd[1510]: time="2025-02-13T15:39:38.341719381Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 401.727163ms" Feb 13 15:39:38.461391 kubelet[2378]: W0213 15:39:38.461262 2378 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:38.461391 kubelet[2378]: E0213 15:39:38.461354 2378 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:38.503771 kubelet[2378]: W0213 15:39:38.503316 2378 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:38.503771 kubelet[2378]: E0213 15:39:38.503434 2378 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:38.543216 containerd[1510]: time="2025-02-13T15:39:38.537423211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:38.543216 containerd[1510]: time="2025-02-13T15:39:38.542192698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:38.543216 containerd[1510]: time="2025-02-13T15:39:38.542237675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:38.543216 containerd[1510]: time="2025-02-13T15:39:38.542411111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:38.544043 containerd[1510]: time="2025-02-13T15:39:38.543145859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:38.544043 containerd[1510]: time="2025-02-13T15:39:38.543243313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:38.544043 containerd[1510]: time="2025-02-13T15:39:38.543268773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:38.544043 containerd[1510]: time="2025-02-13T15:39:38.543409248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:38.546939 containerd[1510]: time="2025-02-13T15:39:38.545874214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:38.546939 containerd[1510]: time="2025-02-13T15:39:38.545996487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:38.546939 containerd[1510]: time="2025-02-13T15:39:38.546034216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:38.546939 containerd[1510]: time="2025-02-13T15:39:38.546241320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:38.598228 systemd[1]: Started cri-containerd-3c052a7279987b0f16b588ee7b2f73f73510c7c15a362909dab5f742b78541c2.scope - libcontainer container 3c052a7279987b0f16b588ee7b2f73f73510c7c15a362909dab5f742b78541c2. Feb 13 15:39:38.609325 systemd[1]: Started cri-containerd-13788bb5f005c3f248778fb5bec7a92cff012d1103d0e9b218e7c7d072146487.scope - libcontainer container 13788bb5f005c3f248778fb5bec7a92cff012d1103d0e9b218e7c7d072146487. Feb 13 15:39:38.618387 systemd[1]: Started cri-containerd-3d98dcac90973cb89db33fa8bc9bef657f7b8778fca162efac64f80f64837177.scope - libcontainer container 3d98dcac90973cb89db33fa8bc9bef657f7b8778fca162efac64f80f64837177. Feb 13 15:39:38.697751 kubelet[2378]: W0213 15:39:38.697056 2378 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:38.697751 kubelet[2378]: E0213 15:39:38.697273 2378 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:38.729453 kubelet[2378]: W0213 15:39:38.729315 2378 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:38.729453 kubelet[2378]: E0213 15:39:38.729419 2378 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.113:6443: connect: connection refused Feb 13 15:39:38.733184 containerd[1510]: time="2025-02-13T15:39:38.733121376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,Uid:9f071d854af1480628984c2a270b6c62,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c052a7279987b0f16b588ee7b2f73f73510c7c15a362909dab5f742b78541c2\"" Feb 13 15:39:38.742238 kubelet[2378]: E0213 15:39:38.741729 2378 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-21291" Feb 13 15:39:38.747832 containerd[1510]: time="2025-02-13T15:39:38.747781001Z" level=info msg="CreateContainer within sandbox 
\"3c052a7279987b0f16b588ee7b2f73f73510c7c15a362909dab5f742b78541c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:39:38.755473 containerd[1510]: time="2025-02-13T15:39:38.755427296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,Uid:be741f7662e4bdb3aab19d6013fd4666,Namespace:kube-system,Attempt:0,} returns sandbox id \"13788bb5f005c3f248778fb5bec7a92cff012d1103d0e9b218e7c7d072146487\"" Feb 13 15:39:38.760889 kubelet[2378]: E0213 15:39:38.760594 2378 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-21291" Feb 13 15:39:38.764900 containerd[1510]: time="2025-02-13T15:39:38.764716474Z" level=info msg="CreateContainer within sandbox \"13788bb5f005c3f248778fb5bec7a92cff012d1103d0e9b218e7c7d072146487\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:39:38.780562 containerd[1510]: time="2025-02-13T15:39:38.780288932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,Uid:ff79047f67dd3f0cf6e7fb0717940f83,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d98dcac90973cb89db33fa8bc9bef657f7b8778fca162efac64f80f64837177\"" Feb 13 15:39:38.785453 kubelet[2378]: E0213 15:39:38.785001 2378 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flat" Feb 13 15:39:38.786198 containerd[1510]: time="2025-02-13T15:39:38.785755379Z" level=info msg="CreateContainer within sandbox \"3c052a7279987b0f16b588ee7b2f73f73510c7c15a362909dab5f742b78541c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"109480747eda12177e897d01f4c8f27428d7116e904f9881eb034ee71cb45bcc\"" Feb 13 15:39:38.787090 containerd[1510]: time="2025-02-13T15:39:38.787056255Z" level=info msg="StartContainer for \"109480747eda12177e897d01f4c8f27428d7116e904f9881eb034ee71cb45bcc\"" Feb 13 15:39:38.788915 containerd[1510]: time="2025-02-13T15:39:38.788829722Z" level=info msg="CreateContainer within sandbox \"3d98dcac90973cb89db33fa8bc9bef657f7b8778fca162efac64f80f64837177\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:39:38.795845 containerd[1510]: time="2025-02-13T15:39:38.795794872Z" level=info msg="CreateContainer within sandbox \"13788bb5f005c3f248778fb5bec7a92cff012d1103d0e9b218e7c7d072146487\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f9ab615230656e8a38d9c311f709528a8929f8a756336d7c7168c8326eda0b94\"" Feb 13 15:39:38.797698 containerd[1510]: time="2025-02-13T15:39:38.796338667Z" level=info msg="StartContainer for \"f9ab615230656e8a38d9c311f709528a8929f8a756336d7c7168c8326eda0b94\"" Feb 13 15:39:38.813284 containerd[1510]: time="2025-02-13T15:39:38.813229659Z" level=info msg="CreateContainer within sandbox \"3d98dcac90973cb89db33fa8bc9bef657f7b8778fca162efac64f80f64837177\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ce181ef6c8a73b0768b6224d3583947175d2aa54f87d1c6c475ba4ce0ca2918d\"" Feb 13 15:39:38.815677 containerd[1510]: time="2025-02-13T15:39:38.814015825Z" level=info 
msg="StartContainer for \"ce181ef6c8a73b0768b6224d3583947175d2aa54f87d1c6c475ba4ce0ca2918d\"" Feb 13 15:39:38.842714 kubelet[2378]: E0213 15:39:38.842662 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.113:6443: connect: connection refused" interval="1.6s" Feb 13 15:39:38.843967 systemd[1]: Started cri-containerd-109480747eda12177e897d01f4c8f27428d7116e904f9881eb034ee71cb45bcc.scope - libcontainer container 109480747eda12177e897d01f4c8f27428d7116e904f9881eb034ee71cb45bcc. Feb 13 15:39:38.868025 systemd[1]: Started cri-containerd-f9ab615230656e8a38d9c311f709528a8929f8a756336d7c7168c8326eda0b94.scope - libcontainer container f9ab615230656e8a38d9c311f709528a8929f8a756336d7c7168c8326eda0b94. Feb 13 15:39:38.886123 systemd[1]: Started cri-containerd-ce181ef6c8a73b0768b6224d3583947175d2aa54f87d1c6c475ba4ce0ca2918d.scope - libcontainer container ce181ef6c8a73b0768b6224d3583947175d2aa54f87d1c6c475ba4ce0ca2918d. Feb 13 15:39:38.962332 kubelet[2378]: I0213 15:39:38.962298 2378 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:38.965539 kubelet[2378]: E0213 15:39:38.963654 2378 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.113:6443/api/v1/nodes\": dial tcp 10.128.0.113:6443: connect: connection refused" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:38.970636 containerd[1510]: time="2025-02-13T15:39:38.970482546Z" level=info msg="StartContainer for \"109480747eda12177e897d01f4c8f27428d7116e904f9881eb034ee71cb45bcc\" returns successfully" Feb 13 15:39:39.035681 containerd[1510]: time="2025-02-13T15:39:39.035622975Z" level=info msg="StartContainer for \"f9ab615230656e8a38d9c311f709528a8929f8a756336d7c7168c8326eda0b94\" returns successfully" Feb 13 15:39:39.042044 containerd[1510]: time="2025-02-13T15:39:39.041970010Z" level=info msg="StartContainer for \"ce181ef6c8a73b0768b6224d3583947175d2aa54f87d1c6c475ba4ce0ca2918d\" returns successfully" Feb 13 15:39:40.573422 kubelet[2378]: I0213 15:39:40.572628 2378 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:43.147146 kubelet[2378]: E0213 15:39:43.147086 2378 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:43.214799 kubelet[2378]: I0213 15:39:43.214545 2378 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:43.413584 kubelet[2378]: I0213 15:39:43.412615 2378 apiserver.go:52] "Watching apiserver" Feb 13 15:39:43.434494 kubelet[2378]: I0213 15:39:43.434385 2378 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:39:43.455105 kubelet[2378]: E0213 15:39:43.453526 2378 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:45.332933 systemd[1]: Reload requested from client PID 2652 ('systemctl') (unit session-9.scope)... Feb 13 15:39:45.332956 systemd[1]: Reloading... Feb 13 15:39:45.487615 zram_generator::config[2700]: No configuration found. Feb 13 15:39:45.641847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:39:45.832682 systemd[1]: Reloading finished in 498 ms. Feb 13 15:39:45.873581 kubelet[2378]: E0213 15:39:45.871988 2378 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal.1823ceb98c4d763c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,UID:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 15:39:37.41330182 +0000 UTC m=+0.538569982,LastTimestamp:2025-02-13 15:39:37.41330182 +0000 UTC m=+0.538569982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal,}" Feb 13 15:39:45.872458 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:45.880319 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:39:45.881596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:45.881884 systemd[1]: kubelet.service: Consumed 1.083s CPU time, 118.4M memory peak. Feb 13 15:39:45.894917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:39:46.052060 update_engine[1501]: I20250213 15:39:46.051952 1501 update_attempter.cc:509] Updating boot flags... Feb 13 15:39:46.187663 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2749) Feb 13 15:39:46.274630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:39:46.295516 (kubelet)[2759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:39:46.445430 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2751) Feb 13 15:39:46.502002 sudo[2774]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:39:46.503147 sudo[2774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:39:46.581624 kubelet[2759]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:39:46.581624 kubelet[2759]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 15:39:46.581624 kubelet[2759]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:39:46.581624 kubelet[2759]: I0213 15:39:46.576250 2759 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:39:46.595569 kubelet[2759]: I0213 15:39:46.593914 2759 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:39:46.595569 kubelet[2759]: I0213 15:39:46.593945 2759 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:39:46.595569 kubelet[2759]: I0213 15:39:46.594424 2759 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:39:46.600598 kubelet[2759]: I0213 15:39:46.597677 2759 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:39:46.600598 kubelet[2759]: I0213 15:39:46.600153 2759 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:39:46.625013 kubelet[2759]: I0213 15:39:46.624798 2759 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:39:46.628025 kubelet[2759]: I0213 15:39:46.627969 2759 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:39:46.630740 kubelet[2759]: I0213 15:39:46.628625 2759 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:39:46.631247 kubelet[2759]: I0213 15:39:46.630997 2759 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:39:46.631247 kubelet[2759]: I0213 15:39:46.631032 2759 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:39:46.631247 
kubelet[2759]: I0213 15:39:46.631112 2759 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:46.631644 kubelet[2759]: I0213 15:39:46.631613 2759 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:39:46.634248 kubelet[2759]: I0213 15:39:46.633006 2759 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:39:46.634248 kubelet[2759]: I0213 15:39:46.633066 2759 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:39:46.634248 kubelet[2759]: I0213 15:39:46.633098 2759 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:39:46.648451 kubelet[2759]: I0213 15:39:46.647443 2759 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:39:46.650017 kubelet[2759]: I0213 15:39:46.649495 2759 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:39:46.654533 kubelet[2759]: I0213 15:39:46.653473 2759 server.go:1264] "Started kubelet" Feb 13 15:39:46.669568 kubelet[2759]: I0213 15:39:46.668607 2759 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:39:46.684260 kubelet[2759]: I0213 15:39:46.681258 2759 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:39:46.685284 kubelet[2759]: I0213 15:39:46.685254 2759 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:39:46.689091 kubelet[2759]: I0213 15:39:46.687044 2759 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:39:46.690388 kubelet[2759]: I0213 15:39:46.689773 2759 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:39:46.692985 kubelet[2759]: I0213 15:39:46.691483 2759 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:39:46.718531 kubelet[2759]: I0213 15:39:46.714437 2759 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:39:46.718531 kubelet[2759]: I0213 15:39:46.714614 2759 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:39:46.721613 kubelet[2759]: I0213 15:39:46.720627 2759 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:39:46.721613 kubelet[2759]: I0213 15:39:46.720942 2759 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:39:46.730422 kubelet[2759]: E0213 15:39:46.728254 2759 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:39:46.731619 kubelet[2759]: I0213 15:39:46.731489 2759 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:39:46.745615 kubelet[2759]: I0213 15:39:46.745394 2759 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:39:46.748377 kubelet[2759]: I0213 15:39:46.748048 2759 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:39:46.748377 kubelet[2759]: I0213 15:39:46.748086 2759 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:39:46.748377 kubelet[2759]: I0213 15:39:46.748111 2759 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:39:46.748377 kubelet[2759]: E0213 15:39:46.748175 2759 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:39:46.802585 kubelet[2759]: I0213 15:39:46.802548 2759 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:46.822778 kubelet[2759]: I0213 15:39:46.822738 2759 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:46.822955 kubelet[2759]: I0213 15:39:46.822832 2759 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:46.852395 kubelet[2759]: E0213 15:39:46.849168 2759 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:39:46.868062 kubelet[2759]: I0213 15:39:46.868022 2759 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:39:46.868062 kubelet[2759]: I0213 15:39:46.868051 2759 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:39:46.868287 kubelet[2759]: I0213 15:39:46.868079 2759 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:39:46.868343 kubelet[2759]: I0213 15:39:46.868309 2759 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:39:46.868393 kubelet[2759]: I0213 15:39:46.868327 2759 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:39:46.868393 kubelet[2759]: I0213 15:39:46.868360 2759 policy_none.go:49] "None policy: Start" Feb 13 15:39:46.869452 kubelet[2759]: I0213 15:39:46.869420 2759 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:39:46.869452 kubelet[2759]: I0213 15:39:46.869456 2759 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:39:46.870246 kubelet[2759]: I0213 15:39:46.869805 2759 state_mem.go:75] "Updated machine memory state" Feb 13 15:39:46.881747 kubelet[2759]: I0213 15:39:46.881210 2759 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:39:46.882358 kubelet[2759]: I0213 15:39:46.881979 2759 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:39:46.885635 kubelet[2759]: I0213 15:39:46.883764 2759 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:39:47.050301 kubelet[2759]: I0213 15:39:47.050243 2759 topology_manager.go:215] "Topology Admit Handler" podUID="9f071d854af1480628984c2a270b6c62" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:47.050765 kubelet[2759]: I0213 15:39:47.050731 2759 topology_manager.go:215] "Topology Admit Handler" podUID="ff79047f67dd3f0cf6e7fb0717940f83" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:47.051117 kubelet[2759]: I0213 15:39:47.051093 2759 topology_manager.go:215] "Topology Admit Handler" podUID="be741f7662e4bdb3aab19d6013fd4666" podNamespace="kube-system" 
podName="kube-scheduler-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:47.063856 kubelet[2759]: W0213 15:39:47.063810 2759 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 15:39:47.066195 kubelet[2759]: W0213 15:39:47.066158 2759 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 15:39:47.067438 kubelet[2759]: W0213 15:39:47.067394 2759 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 15:39:47.129239 kubelet[2759]: I0213 15:39:47.128956 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be741f7662e4bdb3aab19d6013fd4666-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"be741f7662e4bdb3aab19d6013fd4666\") " pod="kube-system/kube-scheduler-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:47.129239 kubelet[2759]: I0213 15:39:47.129020 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f071d854af1480628984c2a270b6c62-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"9f071d854af1480628984c2a270b6c62\") " pod="kube-system/kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:47.130190 kubelet[2759]: I0213 15:39:47.129731 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f071d854af1480628984c2a270b6c62-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"9f071d854af1480628984c2a270b6c62\") " pod="kube-system/kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:47.130190 kubelet[2759]: I0213 15:39:47.129892 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff79047f67dd3f0cf6e7fb0717940f83-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"ff79047f67dd3f0cf6e7fb0717940f83\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:47.130190 kubelet[2759]: I0213 15:39:47.130050 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff79047f67dd3f0cf6e7fb0717940f83-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"ff79047f67dd3f0cf6e7fb0717940f83\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:47.130190 kubelet[2759]: I0213 15:39:47.130095 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/9f071d854af1480628984c2a270b6c62-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"9f071d854af1480628984c2a270b6c62\") " pod="kube-system/kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:47.130696 kubelet[2759]: I0213 15:39:47.130363 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff79047f67dd3f0cf6e7fb0717940f83-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"ff79047f67dd3f0cf6e7fb0717940f83\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:47.130987 kubelet[2759]: I0213 15:39:47.130815 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ff79047f67dd3f0cf6e7fb0717940f83-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"ff79047f67dd3f0cf6e7fb0717940f83\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:47.130987 kubelet[2759]: I0213 15:39:47.130922 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff79047f67dd3f0cf6e7fb0717940f83-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" (UID: \"ff79047f67dd3f0cf6e7fb0717940f83\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:47.462738 sudo[2774]: pam_unix(sudo:session): session closed for user root Feb 13 15:39:47.646996 kubelet[2759]: I0213 15:39:47.646950 2759 apiserver.go:52] "Watching apiserver" Feb 13 15:39:47.721266 kubelet[2759]: I0213 15:39:47.721128 2759 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:39:47.800178 kubelet[2759]: W0213 15:39:47.799693 2759 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 15:39:47.800178 kubelet[2759]: E0213 15:39:47.799780 2759 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" Feb 13 15:39:47.846191 kubelet[2759]: I0213 15:39:47.845922 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" podStartSLOduration=0.845895857 podStartE2EDuration="845.895857ms" podCreationTimestamp="2025-02-13 15:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:47.83492296 +0000 UTC m=+1.509442988" watchObservedRunningTime="2025-02-13 15:39:47.845895857 +0000 UTC m=+1.520415873" Feb 13 15:39:47.862517 kubelet[2759]: I0213 15:39:47.862075 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" podStartSLOduration=0.861935478 podStartE2EDuration="861.935478ms" podCreationTimestamp="2025-02-13 15:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:47.861457717 +0000 UTC m=+1.535977736" watchObservedRunningTime="2025-02-13 15:39:47.861935478 +0000 UTC m=+1.536455495" Feb 13 15:39:47.862517 kubelet[2759]: I0213 15:39:47.862431 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" podStartSLOduration=0.86242013 podStartE2EDuration="862.42013ms" podCreationTimestamp="2025-02-13 15:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:47.847271197 +0000 UTC m=+1.521791208" watchObservedRunningTime="2025-02-13 15:39:47.86242013 +0000 UTC m=+1.536940146" Feb 13 15:39:49.570268 sudo[1796]: pam_unix(sudo:session): session closed for user root Feb 13 15:39:49.613847 sshd[1795]: Connection closed by 139.178.68.195 port 44218 Feb 13 15:39:49.613547 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:49.623302 systemd[1]: sshd@8-10.128.0.113:22-139.178.68.195:44218.service: Deactivated successfully. Feb 13 15:39:49.626909 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:39:49.627350 systemd[1]: session-9.scope: Consumed 6.505s CPU time, 294.1M memory peak. Feb 13 15:39:49.629728 systemd-logind[1498]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:39:49.631380 systemd-logind[1498]: Removed session 9. Feb 13 15:40:00.431944 kubelet[2759]: I0213 15:40:00.431875 2759 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:40:00.432841 containerd[1510]: time="2025-02-13T15:40:00.432790123Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:40:00.433566 kubelet[2759]: I0213 15:40:00.433428 2759 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:40:00.537771 kubelet[2759]: I0213 15:40:00.537714 2759 topology_manager.go:215] "Topology Admit Handler" podUID="49494233-ebba-4fb4-8f94-c8a738cc8d98" podNamespace="kube-system" podName="cilium-lncnj" Feb 13 15:40:00.556714 systemd[1]: Created slice kubepods-burstable-pod49494233_ebba_4fb4_8f94_c8a738cc8d98.slice - libcontainer container kubepods-burstable-pod49494233_ebba_4fb4_8f94_c8a738cc8d98.slice. Feb 13 15:40:00.581581 kubelet[2759]: I0213 15:40:00.580989 2759 topology_manager.go:215] "Topology Admit Handler" podUID="5f019a3c-78d7-446e-b90e-374c66c00c1b" podNamespace="kube-system" podName="cilium-operator-599987898-bblxm" Feb 13 15:40:00.592060 systemd[1]: Created slice kubepods-besteffort-pod5f019a3c_78d7_446e_b90e_374c66c00c1b.slice - libcontainer container kubepods-besteffort-pod5f019a3c_78d7_446e_b90e_374c66c00c1b.slice. 
Feb 13 15:40:00.597543 kubelet[2759]: I0213 15:40:00.597099 2759 topology_manager.go:215] "Topology Admit Handler" podUID="86ef1b51-7a46-4ef4-bd7c-c742f28937f9" podNamespace="kube-system" podName="kube-proxy-9lq6n" Feb 13 15:40:00.612165 systemd[1]: Created slice kubepods-besteffort-pod86ef1b51_7a46_4ef4_bd7c_c742f28937f9.slice - libcontainer container kubepods-besteffort-pod86ef1b51_7a46_4ef4_bd7c_c742f28937f9.slice. Feb 13 15:40:00.614734 kubelet[2759]: I0213 15:40:00.614700 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xdhm\" (UniqueName: \"kubernetes.io/projected/86ef1b51-7a46-4ef4-bd7c-c742f28937f9-kube-api-access-6xdhm\") pod \"kube-proxy-9lq6n\" (UID: \"86ef1b51-7a46-4ef4-bd7c-c742f28937f9\") " pod="kube-system/kube-proxy-9lq6n" Feb 13 15:40:00.615600 kubelet[2759]: I0213 15:40:00.614965 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-etc-cni-netd\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.615600 kubelet[2759]: I0213 15:40:00.615003 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86ef1b51-7a46-4ef4-bd7c-c742f28937f9-xtables-lock\") pod \"kube-proxy-9lq6n\" (UID: \"86ef1b51-7a46-4ef4-bd7c-c742f28937f9\") " pod="kube-system/kube-proxy-9lq6n" Feb 13 15:40:00.615600 kubelet[2759]: I0213 15:40:00.615034 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-lib-modules\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.615600 kubelet[2759]: I0213 15:40:00.615062 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/49494233-ebba-4fb4-8f94-c8a738cc8d98-clustermesh-secrets\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.615600 kubelet[2759]: I0213 15:40:00.615092 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8sp9\" (UniqueName: \"kubernetes.io/projected/49494233-ebba-4fb4-8f94-c8a738cc8d98-kube-api-access-x8sp9\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.615600 kubelet[2759]: I0213 15:40:00.615121 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-cni-path\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.616020 kubelet[2759]: I0213 15:40:00.615178 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-xtables-lock\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.616020 kubelet[2759]: I0213 15:40:00.615213 2759 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f019a3c-78d7-446e-b90e-374c66c00c1b-cilium-config-path\") pod \"cilium-operator-599987898-bblxm\" (UID: \"5f019a3c-78d7-446e-b90e-374c66c00c1b\") " pod="kube-system/cilium-operator-599987898-bblxm" Feb 13 15:40:00.616020 kubelet[2759]: I0213 15:40:00.615241 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spjx6\" (UniqueName: \"kubernetes.io/projected/5f019a3c-78d7-446e-b90e-374c66c00c1b-kube-api-access-spjx6\") pod \"cilium-operator-599987898-bblxm\" (UID: \"5f019a3c-78d7-446e-b90e-374c66c00c1b\") " pod="kube-system/cilium-operator-599987898-bblxm" Feb 13 15:40:00.616020 kubelet[2759]: I0213 15:40:00.615271 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86ef1b51-7a46-4ef4-bd7c-c742f28937f9-lib-modules\") pod \"kube-proxy-9lq6n\" (UID: \"86ef1b51-7a46-4ef4-bd7c-c742f28937f9\") " pod="kube-system/kube-proxy-9lq6n" Feb 13 15:40:00.616020 kubelet[2759]: I0213 15:40:00.615307 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-hostproc\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.616259 kubelet[2759]: I0213 15:40:00.615335 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-host-proc-sys-net\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.616259 kubelet[2759]: I0213 15:40:00.615360 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-host-proc-sys-kernel\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.616259 kubelet[2759]: I0213 15:40:00.615399 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-config-path\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.616259 kubelet[2759]: I0213 15:40:00.615427 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-run\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.618427 kubelet[2759]: I0213 15:40:00.618223 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-bpf-maps\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.618427 kubelet[2759]: I0213 15:40:00.618292 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-cgroup\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.618427 kubelet[2759]: I0213 15:40:00.618324 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/49494233-ebba-4fb4-8f94-c8a738cc8d98-hubble-tls\") pod \"cilium-lncnj\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " pod="kube-system/cilium-lncnj" Feb 13 15:40:00.618427 kubelet[2759]: I0213 15:40:00.618349 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/86ef1b51-7a46-4ef4-bd7c-c742f28937f9-kube-proxy\") pod \"kube-proxy-9lq6n\" (UID: \"86ef1b51-7a46-4ef4-bd7c-c742f28937f9\") " pod="kube-system/kube-proxy-9lq6n" Feb 13 15:40:00.672100 kubelet[2759]: W0213 15:40:00.671552 2759 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal' and this object Feb 13 15:40:00.672100 kubelet[2759]: E0213 15:40:00.671660 2759 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal' and this object Feb 13 15:40:00.672930 kubelet[2759]: W0213 15:40:00.672676 2759 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal' and this object Feb 13 15:40:00.672930 kubelet[2759]: E0213 15:40:00.672718 2759 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal' and this object Feb 13 15:40:00.674852 kubelet[2759]: W0213 15:40:00.674734 2759 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal' and this object Feb 13 15:40:00.674852 kubelet[2759]: E0213 15:40:00.674784 2759 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is 
forbidden: User "system:node:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal' and this object Feb 13 15:40:00.674852 kubelet[2759]: W0213 15:40:00.674757 2759 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal' and this object Feb 13 15:40:00.674852 kubelet[2759]: E0213 15:40:00.674810 2759 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal' and this object Feb 13 15:40:00.675152 kubelet[2759]: W0213 15:40:00.674846 2759 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal' and this object Feb 13 15:40:00.675152 kubelet[2759]: E0213 15:40:00.674869 2759 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal' and this object Feb 13 15:40:01.721470 kubelet[2759]: E0213 15:40:01.721397 2759 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:40:01.722108 kubelet[2759]: E0213 15:40:01.721594 2759 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f019a3c-78d7-446e-b90e-374c66c00c1b-cilium-config-path podName:5f019a3c-78d7-446e-b90e-374c66c00c1b nodeName:}" failed. No retries permitted until 2025-02-13 15:40:02.221561747 +0000 UTC m=+15.896081762 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/5f019a3c-78d7-446e-b90e-374c66c00c1b-cilium-config-path") pod "cilium-operator-599987898-bblxm" (UID: "5f019a3c-78d7-446e-b90e-374c66c00c1b") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:40:01.722108 kubelet[2759]: E0213 15:40:01.721951 2759 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:40:01.722108 kubelet[2759]: E0213 15:40:01.722008 2759 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-config-path podName:49494233-ebba-4fb4-8f94-c8a738cc8d98 nodeName:}" failed. No retries permitted until 2025-02-13 15:40:02.22199183 +0000 UTC m=+15.896511838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-config-path") pod "cilium-lncnj" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:40:01.776042 kubelet[2759]: E0213 15:40:01.775979 2759 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:40:01.776264 kubelet[2759]: E0213 15:40:01.776060 2759 projected.go:200] Error preparing data for projected volume kube-api-access-x8sp9 for pod kube-system/cilium-lncnj: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:40:01.776264 kubelet[2759]: E0213 15:40:01.776164 2759 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49494233-ebba-4fb4-8f94-c8a738cc8d98-kube-api-access-x8sp9 podName:49494233-ebba-4fb4-8f94-c8a738cc8d98 nodeName:}" failed. No retries permitted until 2025-02-13 15:40:02.276139999 +0000 UTC m=+15.950660005 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x8sp9" (UniqueName: "kubernetes.io/projected/49494233-ebba-4fb4-8f94-c8a738cc8d98-kube-api-access-x8sp9") pod "cilium-lncnj" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:40:01.776700 kubelet[2759]: E0213 15:40:01.776557 2759 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:40:01.776700 kubelet[2759]: E0213 15:40:01.776598 2759 projected.go:200] Error preparing data for projected volume kube-api-access-6xdhm for pod kube-system/kube-proxy-9lq6n: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:40:01.776700 kubelet[2759]: E0213 15:40:01.776666 2759 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/86ef1b51-7a46-4ef4-bd7c-c742f28937f9-kube-api-access-6xdhm podName:86ef1b51-7a46-4ef4-bd7c-c742f28937f9 nodeName:}" failed. No retries permitted until 2025-02-13 15:40:02.276643623 +0000 UTC m=+15.951163623 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6xdhm" (UniqueName: "kubernetes.io/projected/86ef1b51-7a46-4ef4-bd7c-c742f28937f9-kube-api-access-6xdhm") pod "kube-proxy-9lq6n" (UID: "86ef1b51-7a46-4ef4-bd7c-c742f28937f9") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:40:01.777183 kubelet[2759]: E0213 15:40:01.777079 2759 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:40:01.777183 kubelet[2759]: E0213 15:40:01.777106 2759 projected.go:200] Error preparing data for projected volume kube-api-access-spjx6 for pod kube-system/cilium-operator-599987898-bblxm: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:40:01.777183 kubelet[2759]: E0213 15:40:01.777155 2759 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f019a3c-78d7-446e-b90e-374c66c00c1b-kube-api-access-spjx6 podName:5f019a3c-78d7-446e-b90e-374c66c00c1b nodeName:}" failed. No retries permitted until 2025-02-13 15:40:02.277139624 +0000 UTC m=+15.951659633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-spjx6" (UniqueName: "kubernetes.io/projected/5f019a3c-78d7-446e-b90e-374c66c00c1b-kube-api-access-spjx6") pod "cilium-operator-599987898-bblxm" (UID: "5f019a3c-78d7-446e-b90e-374c66c00c1b") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:40:02.364932 containerd[1510]: time="2025-02-13T15:40:02.364870377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lncnj,Uid:49494233-ebba-4fb4-8f94-c8a738cc8d98,Namespace:kube-system,Attempt:0,}" Feb 13 15:40:02.406377 containerd[1510]: time="2025-02-13T15:40:02.406133126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-bblxm,Uid:5f019a3c-78d7-446e-b90e-374c66c00c1b,Namespace:kube-system,Attempt:0,}" Feb 13 15:40:02.414422 containerd[1510]: time="2025-02-13T15:40:02.413940087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:02.414422 containerd[1510]: time="2025-02-13T15:40:02.414044315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:02.414422 containerd[1510]: time="2025-02-13T15:40:02.414067696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:02.416298 containerd[1510]: time="2025-02-13T15:40:02.415648682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:02.419979 containerd[1510]: time="2025-02-13T15:40:02.419433374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lq6n,Uid:86ef1b51-7a46-4ef4-bd7c-c742f28937f9,Namespace:kube-system,Attempt:0,}" Feb 13 15:40:02.473846 systemd[1]: Started cri-containerd-c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a.scope - libcontainer container c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a. Feb 13 15:40:02.484771 containerd[1510]: time="2025-02-13T15:40:02.482833114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:02.484771 containerd[1510]: time="2025-02-13T15:40:02.482936249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:02.484771 containerd[1510]: time="2025-02-13T15:40:02.482958705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:02.484771 containerd[1510]: time="2025-02-13T15:40:02.483099316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:02.513028 containerd[1510]: time="2025-02-13T15:40:02.512553043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:02.513028 containerd[1510]: time="2025-02-13T15:40:02.512826597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:02.517684 containerd[1510]: time="2025-02-13T15:40:02.516809869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:02.517684 containerd[1510]: time="2025-02-13T15:40:02.517016523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:02.553862 systemd[1]: Started cri-containerd-307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd.scope - libcontainer container 307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd. Feb 13 15:40:02.595080 containerd[1510]: time="2025-02-13T15:40:02.595016326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lncnj,Uid:49494233-ebba-4fb4-8f94-c8a738cc8d98,Namespace:kube-system,Attempt:0,} returns sandbox id \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\"" Feb 13 15:40:02.606401 containerd[1510]: time="2025-02-13T15:40:02.606339416Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:40:02.615467 systemd[1]: Started cri-containerd-c0361fd7e1566517adafe48c46e57ceabc7ef0307795a2999c7ca6c2c49fe7e3.scope - libcontainer container c0361fd7e1566517adafe48c46e57ceabc7ef0307795a2999c7ca6c2c49fe7e3. Feb 13 15:40:02.688384 containerd[1510]: time="2025-02-13T15:40:02.687375590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lq6n,Uid:86ef1b51-7a46-4ef4-bd7c-c742f28937f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0361fd7e1566517adafe48c46e57ceabc7ef0307795a2999c7ca6c2c49fe7e3\"" Feb 13 15:40:02.696090 containerd[1510]: time="2025-02-13T15:40:02.695983427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-bblxm,Uid:5f019a3c-78d7-446e-b90e-374c66c00c1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\"" Feb 13 15:40:02.699433 containerd[1510]: time="2025-02-13T15:40:02.699031339Z" level=info msg="CreateContainer within sandbox \"c0361fd7e1566517adafe48c46e57ceabc7ef0307795a2999c7ca6c2c49fe7e3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:40:02.730143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632485049.mount: Deactivated successfully. 
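The MountVolume.SetUp failures a few entries earlier are not terminal: each one schedules a retry after a growing durationBeforeRetry, 500ms on the first failure, and here every volume mounts on the next attempt once the configmap caches sync. A sketch of that retry pattern follows; only the 500ms initial delay comes from the log, while the doubling factor and the cap are assumptions for illustration:

```go
package main

import (
	"fmt"
	"time"
)

// Sketch of the volume-mount retry pattern implied by the
// "durationBeforeRetry 500ms" entries above. Only the initial 500ms is
// taken from the log; the 2x growth and the 2-minute cap are assumptions.
func main() {
	delay, maxDelay := 500*time.Millisecond, 2*time.Minute
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

In this boot each operation logs exactly one 500ms deferral before the sandboxes start, consistent with success on the first retry.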
Feb 13 15:40:02.733764 containerd[1510]: time="2025-02-13T15:40:02.733700742Z" level=info msg="CreateContainer within sandbox \"c0361fd7e1566517adafe48c46e57ceabc7ef0307795a2999c7ca6c2c49fe7e3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3085068680c270ab9040c3468d0b42d0b5ace41913ee15208a1a28e0853b2f55\"" Feb 13 15:40:02.735176 containerd[1510]: time="2025-02-13T15:40:02.734912543Z" level=info msg="StartContainer for \"3085068680c270ab9040c3468d0b42d0b5ace41913ee15208a1a28e0853b2f55\"" Feb 13 15:40:02.792780 systemd[1]: Started cri-containerd-3085068680c270ab9040c3468d0b42d0b5ace41913ee15208a1a28e0853b2f55.scope - libcontainer container 3085068680c270ab9040c3468d0b42d0b5ace41913ee15208a1a28e0853b2f55. Feb 13 15:40:02.855453 containerd[1510]: time="2025-02-13T15:40:02.855379157Z" level=info msg="StartContainer for \"3085068680c270ab9040c3468d0b42d0b5ace41913ee15208a1a28e0853b2f55\" returns successfully" Feb 13 15:40:03.857764 kubelet[2759]: I0213 15:40:03.857359 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9lq6n" podStartSLOduration=3.8573322770000003 podStartE2EDuration="3.857332277s" podCreationTimestamp="2025-02-13 15:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:40:03.856358469 +0000 UTC m=+17.530878484" watchObservedRunningTime="2025-02-13 15:40:03.857332277 +0000 UTC m=+17.531852293" Feb 13 15:40:08.871770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015654947.mount: Deactivated successfully. Feb 13 15:40:11.807357 containerd[1510]: time="2025-02-13T15:40:11.807271861Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:11.808890 containerd[1510]: time="2025-02-13T15:40:11.808810866Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:40:11.810338 containerd[1510]: time="2025-02-13T15:40:11.810269642Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:11.812865 containerd[1510]: time="2025-02-13T15:40:11.812644754Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.206235255s" Feb 13 15:40:11.812865 containerd[1510]: time="2025-02-13T15:40:11.812704423Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:40:11.815955 containerd[1510]: time="2025-02-13T15:40:11.815145395Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:40:11.818022 containerd[1510]: time="2025-02-13T15:40:11.817793019Z" level=info msg="CreateContainer within sandbox 
\"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:40:11.841661 containerd[1510]: time="2025-02-13T15:40:11.841607448Z" level=info msg="CreateContainer within sandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602\"" Feb 13 15:40:11.842468 containerd[1510]: time="2025-02-13T15:40:11.842403739Z" level=info msg="StartContainer for \"17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602\"" Feb 13 15:40:11.895787 systemd[1]: Started cri-containerd-17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602.scope - libcontainer container 17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602. Feb 13 15:40:11.943136 containerd[1510]: time="2025-02-13T15:40:11.942975732Z" level=info msg="StartContainer for \"17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602\" returns successfully" Feb 13 15:40:11.958097 systemd[1]: cri-containerd-17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602.scope: Deactivated successfully. Feb 13 15:40:12.832023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602-rootfs.mount: Deactivated successfully. Feb 13 15:40:13.775962 containerd[1510]: time="2025-02-13T15:40:13.775873905Z" level=info msg="shim disconnected" id=17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602 namespace=k8s.io Feb 13 15:40:13.775962 containerd[1510]: time="2025-02-13T15:40:13.775950270Z" level=warning msg="cleaning up after shim disconnected" id=17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602 namespace=k8s.io Feb 13 15:40:13.775962 containerd[1510]: time="2025-02-13T15:40:13.775968520Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:40:13.957093 containerd[1510]: time="2025-02-13T15:40:13.954921410Z" level=info msg="CreateContainer within sandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:40:13.980035 containerd[1510]: time="2025-02-13T15:40:13.979356584Z" level=info msg="CreateContainer within sandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d\"" Feb 13 15:40:13.980633 containerd[1510]: time="2025-02-13T15:40:13.980462070Z" level=info msg="StartContainer for \"4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d\"" Feb 13 15:40:14.039690 systemd[1]: run-containerd-runc-k8s.io-4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d-runc.8iqcaq.mount: Deactivated successfully. Feb 13 15:40:14.052763 systemd[1]: Started cri-containerd-4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d.scope - libcontainer container 4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d. Feb 13 15:40:14.096141 containerd[1510]: time="2025-02-13T15:40:14.095966872Z" level=info msg="StartContainer for \"4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d\" returns successfully" Feb 13 15:40:14.110872 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:40:14.111390 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 15:40:14.112705 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:40:14.122653 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:40:14.123117 systemd[1]: cri-containerd-4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d.scope: Deactivated successfully. Feb 13 15:40:14.158683 containerd[1510]: time="2025-02-13T15:40:14.158605676Z" level=info msg="shim disconnected" id=4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d namespace=k8s.io Feb 13 15:40:14.158683 containerd[1510]: time="2025-02-13T15:40:14.158682858Z" level=warning msg="cleaning up after shim disconnected" id=4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d namespace=k8s.io Feb 13 15:40:14.159049 containerd[1510]: time="2025-02-13T15:40:14.158710360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:40:14.166839 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:40:14.953582 containerd[1510]: time="2025-02-13T15:40:14.953325431Z" level=info msg="CreateContainer within sandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:40:14.977945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d-rootfs.mount: Deactivated successfully. Feb 13 15:40:14.989232 containerd[1510]: time="2025-02-13T15:40:14.989170497Z" level=info msg="CreateContainer within sandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e\"" Feb 13 15:40:14.989979 containerd[1510]: time="2025-02-13T15:40:14.989833431Z" level=info msg="StartContainer for \"3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e\"" Feb 13 15:40:15.045746 systemd[1]: Started cri-containerd-3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e.scope - libcontainer container 3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e. Feb 13 15:40:15.088837 containerd[1510]: time="2025-02-13T15:40:15.088787186Z" level=info msg="StartContainer for \"3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e\" returns successfully" Feb 13 15:40:15.092529 systemd[1]: cri-containerd-3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e.scope: Deactivated successfully. Feb 13 15:40:15.127843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e-rootfs.mount: Deactivated successfully. 
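Each cilium init container in this sequence leaves the same three-part trail: StartContainer returns successfully, the cri-containerd-<id>.scope deactivates, and the shim's rootfs mount is cleaned up. When correlating those events by hand, the 64-hex container ID is the join key. An illustrative extractor, with the sample line quoted from the log above:

```go
package main

import (
	"fmt"
	"regexp"
)

// Pull the 64-hex container ID out of a journal line so StartContainer,
// scope-deactivation, and shim-cleanup events can be matched up.
func main() {
	idRe := regexp.MustCompile(`[0-9a-f]{64}`)
	line := "systemd[1]: cri-containerd-4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d.scope: Deactivated successfully."
	fmt.Println(idRe.FindString(line))
}
```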
Feb 13 15:40:15.132825 containerd[1510]: time="2025-02-13T15:40:15.132486897Z" level=info msg="shim disconnected" id=3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e namespace=k8s.io Feb 13 15:40:15.132825 containerd[1510]: time="2025-02-13T15:40:15.132578523Z" level=warning msg="cleaning up after shim disconnected" id=3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e namespace=k8s.io Feb 13 15:40:15.132825 containerd[1510]: time="2025-02-13T15:40:15.132595444Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:40:15.957792 containerd[1510]: time="2025-02-13T15:40:15.957442368Z" level=info msg="CreateContainer within sandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:40:15.989547 containerd[1510]: time="2025-02-13T15:40:15.989067035Z" level=info msg="CreateContainer within sandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05\"" Feb 13 15:40:15.991991 containerd[1510]: time="2025-02-13T15:40:15.991903477Z" level=info msg="StartContainer for \"ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05\"" Feb 13 15:40:16.042736 systemd[1]: Started cri-containerd-ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05.scope - libcontainer container ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05. Feb 13 15:40:16.076065 systemd[1]: cri-containerd-ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05.scope: Deactivated successfully. Feb 13 15:40:16.081556 containerd[1510]: time="2025-02-13T15:40:16.081292351Z" level=info msg="StartContainer for \"ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05\" returns successfully" Feb 13 15:40:16.111186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05-rootfs.mount: Deactivated successfully. Feb 13 15:40:16.113862 containerd[1510]: time="2025-02-13T15:40:16.113786876Z" level=info msg="shim disconnected" id=ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05 namespace=k8s.io Feb 13 15:40:16.113862 containerd[1510]: time="2025-02-13T15:40:16.113860709Z" level=warning msg="cleaning up after shim disconnected" id=ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05 namespace=k8s.io Feb 13 15:40:16.114130 containerd[1510]: time="2025-02-13T15:40:16.113874158Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:40:16.966532 containerd[1510]: time="2025-02-13T15:40:16.965805945Z" level=info msg="CreateContainer within sandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:40:17.020530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12489650.mount: Deactivated successfully. Feb 13 15:40:17.031138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount724565475.mount: Deactivated successfully. 
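The tmpmount units being cleaned up here show systemd's path escaping: slashes in the mount path become dashes, and dashes that were really part of the path are hex-escaped as \x2d. A simplified rendition covering only the two rules visible in these unit names (real systemd-escape also handles dots, leading digits, and non-ASCII characters):

```go
package main

import (
	"fmt"
	"strings"
)

// Simplified systemd path escaping: enough to reproduce unit names like
// var-lib-containerd-tmpmounts-containerd\x2dmount12489650.mount above.
// Real systemd-escape covers more cases than these two rules.
func escapePath(p string) string {
	p = strings.TrimPrefix(p, "/")
	p = strings.ReplaceAll(p, "-", `\x2d`) // literal dashes first...
	return strings.ReplaceAll(p, "/", "-") // ...then slashes become dashes
}

func main() {
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount12489650") + ".mount")
	// var-lib-containerd-tmpmounts-containerd\x2dmount12489650.mount
}
```

Escaping the literal dashes before converting slashes is what keeps the mapping reversible.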
Feb 13 15:40:17.033207 containerd[1510]: time="2025-02-13T15:40:17.033154000Z" level=info msg="CreateContainer within sandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\"" Feb 13 15:40:17.037910 containerd[1510]: time="2025-02-13T15:40:17.037769015Z" level=info msg="StartContainer for \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\"" Feb 13 15:40:17.095245 systemd[1]: Started cri-containerd-34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483.scope - libcontainer container 34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483. Feb 13 15:40:17.153410 containerd[1510]: time="2025-02-13T15:40:17.152604290Z" level=info msg="StartContainer for \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\" returns successfully" Feb 13 15:40:17.374966 kubelet[2759]: I0213 15:40:17.374913 2759 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:40:17.423114 kubelet[2759]: I0213 15:40:17.423038 2759 topology_manager.go:215] "Topology Admit Handler" podUID="4345f5a9-6e61-4760-a6b2-b28b96565df6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-57xfs" Feb 13 15:40:17.433709 kubelet[2759]: I0213 15:40:17.432793 2759 topology_manager.go:215] "Topology Admit Handler" podUID="522865ed-2ac9-4ad9-9ec7-1fdd12db8180" podNamespace="kube-system" podName="coredns-7db6d8ff4d-f25ws" Feb 13 15:40:17.442398 systemd[1]: Created slice kubepods-burstable-pod4345f5a9_6e61_4760_a6b2_b28b96565df6.slice - libcontainer container kubepods-burstable-pod4345f5a9_6e61_4760_a6b2_b28b96565df6.slice. Feb 13 15:40:17.445631 kubelet[2759]: I0213 15:40:17.444013 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/522865ed-2ac9-4ad9-9ec7-1fdd12db8180-config-volume\") pod \"coredns-7db6d8ff4d-f25ws\" (UID: \"522865ed-2ac9-4ad9-9ec7-1fdd12db8180\") " pod="kube-system/coredns-7db6d8ff4d-f25ws" Feb 13 15:40:17.445631 kubelet[2759]: I0213 15:40:17.444070 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kgbh\" (UniqueName: \"kubernetes.io/projected/522865ed-2ac9-4ad9-9ec7-1fdd12db8180-kube-api-access-8kgbh\") pod \"coredns-7db6d8ff4d-f25ws\" (UID: \"522865ed-2ac9-4ad9-9ec7-1fdd12db8180\") " pod="kube-system/coredns-7db6d8ff4d-f25ws" Feb 13 15:40:17.445631 kubelet[2759]: I0213 15:40:17.444117 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swh9c\" (UniqueName: \"kubernetes.io/projected/4345f5a9-6e61-4760-a6b2-b28b96565df6-kube-api-access-swh9c\") pod \"coredns-7db6d8ff4d-57xfs\" (UID: \"4345f5a9-6e61-4760-a6b2-b28b96565df6\") " pod="kube-system/coredns-7db6d8ff4d-57xfs" Feb 13 15:40:17.445631 kubelet[2759]: I0213 15:40:17.444152 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4345f5a9-6e61-4760-a6b2-b28b96565df6-config-volume\") pod \"coredns-7db6d8ff4d-57xfs\" (UID: \"4345f5a9-6e61-4760-a6b2-b28b96565df6\") " pod="kube-system/coredns-7db6d8ff4d-57xfs" Feb 13 15:40:17.459981 systemd[1]: Created slice kubepods-burstable-pod522865ed_2ac9_4ad9_9ec7_1fdd12db8180.slice - libcontainer container 
kubepods-burstable-pod522865ed_2ac9_4ad9_9ec7_1fdd12db8180.slice. Feb 13 15:40:17.750392 containerd[1510]: time="2025-02-13T15:40:17.750221645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-57xfs,Uid:4345f5a9-6e61-4760-a6b2-b28b96565df6,Namespace:kube-system,Attempt:0,}" Feb 13 15:40:17.771116 containerd[1510]: time="2025-02-13T15:40:17.768069315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f25ws,Uid:522865ed-2ac9-4ad9-9ec7-1fdd12db8180,Namespace:kube-system,Attempt:0,}" Feb 13 15:40:18.013962 kubelet[2759]: I0213 15:40:18.013663 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lncnj" podStartSLOduration=8.803055629 podStartE2EDuration="18.013627967s" podCreationTimestamp="2025-02-13 15:40:00 +0000 UTC" firstStartedPulling="2025-02-13 15:40:02.603955359 +0000 UTC m=+16.278475365" lastFinishedPulling="2025-02-13 15:40:11.814527693 +0000 UTC m=+25.489047703" observedRunningTime="2025-02-13 15:40:18.007589939 +0000 UTC m=+31.682109957" watchObservedRunningTime="2025-02-13 15:40:18.013627967 +0000 UTC m=+31.688147985" Feb 13 15:40:20.765556 containerd[1510]: time="2025-02-13T15:40:20.765439006Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:20.767323 containerd[1510]: time="2025-02-13T15:40:20.767049580Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:40:20.769012 containerd[1510]: time="2025-02-13T15:40:20.768873733Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:40:20.771114 containerd[1510]: time="2025-02-13T15:40:20.770899999Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 8.955709819s" Feb 13 15:40:20.771114 containerd[1510]: time="2025-02-13T15:40:20.770950482Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:40:20.775137 containerd[1510]: time="2025-02-13T15:40:20.774876777Z" level=info msg="CreateContainer within sandbox \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:40:20.799133 containerd[1510]: time="2025-02-13T15:40:20.799061119Z" level=info msg="CreateContainer within sandbox \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\"" Feb 13 15:40:20.800604 containerd[1510]: time="2025-02-13T15:40:20.799740490Z" level=info msg="StartContainer for \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\"" Feb 13 
15:40:20.851066 systemd[1]: run-containerd-runc-k8s.io-dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807-runc.KC7SNT.mount: Deactivated successfully. Feb 13 15:40:20.865847 systemd[1]: Started cri-containerd-dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807.scope - libcontainer container dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807. Feb 13 15:40:20.908828 containerd[1510]: time="2025-02-13T15:40:20.908775256Z" level=info msg="StartContainer for \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\" returns successfully" Feb 13 15:40:21.006536 kubelet[2759]: I0213 15:40:21.005218 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-bblxm" podStartSLOduration=2.931290386 podStartE2EDuration="21.005188124s" podCreationTimestamp="2025-02-13 15:40:00 +0000 UTC" firstStartedPulling="2025-02-13 15:40:02.698366366 +0000 UTC m=+16.372886371" lastFinishedPulling="2025-02-13 15:40:20.772264104 +0000 UTC m=+34.446784109" observedRunningTime="2025-02-13 15:40:21.001960244 +0000 UTC m=+34.676480261" watchObservedRunningTime="2025-02-13 15:40:21.005188124 +0000 UTC m=+34.679708139" Feb 13 15:40:24.511630 systemd-networkd[1402]: cilium_host: Link UP Feb 13 15:40:24.513685 systemd-networkd[1402]: cilium_net: Link UP Feb 13 15:40:24.515082 systemd-networkd[1402]: cilium_net: Gained carrier Feb 13 15:40:24.515667 systemd-networkd[1402]: cilium_host: Gained carrier Feb 13 15:40:24.527172 systemd-networkd[1402]: cilium_net: Gained IPv6LL Feb 13 15:40:24.696050 systemd-networkd[1402]: cilium_vxlan: Link UP Feb 13 15:40:24.696070 systemd-networkd[1402]: cilium_vxlan: Gained carrier Feb 13 15:40:24.956380 systemd-networkd[1402]: cilium_host: Gained IPv6LL Feb 13 15:40:25.011803 kernel: NET: Registered PF_ALG protocol family Feb 13 15:40:25.886321 systemd-networkd[1402]: lxc_health: Link UP Feb 13 15:40:25.886878 systemd-networkd[1402]: lxc_health: Gained carrier Feb 13 15:40:26.329616 systemd-networkd[1402]: lxc15a3a89b6930: Link UP Feb 13 15:40:26.348148 kernel: eth0: renamed from tmpd2a83 Feb 13 15:40:26.359251 systemd-networkd[1402]: cilium_vxlan: Gained IPv6LL Feb 13 15:40:26.359873 systemd-networkd[1402]: lxc15a3a89b6930: Gained carrier Feb 13 15:40:26.396767 systemd-networkd[1402]: lxcc44cacb5bb56: Link UP Feb 13 15:40:26.406626 kernel: eth0: renamed from tmp6d08c Feb 13 15:40:26.427021 systemd-networkd[1402]: lxcc44cacb5bb56: Gained carrier Feb 13 15:40:27.764827 systemd-networkd[1402]: lxc_health: Gained IPv6LL Feb 13 15:40:27.765279 systemd-networkd[1402]: lxc15a3a89b6930: Gained IPv6LL Feb 13 15:40:28.212479 systemd-networkd[1402]: lxcc44cacb5bb56: Gained IPv6LL Feb 13 15:40:30.737431 ntpd[1480]: Listen normally on 8 cilium_host 192.168.0.1:123 Feb 13 15:40:30.737619 ntpd[1480]: Listen normally on 9 cilium_net [fe80::4443:ecff:fe44:8d80%4]:123 Feb 13 15:40:30.738166 ntpd[1480]: 13 Feb 15:40:30 ntpd[1480]: Listen normally on 8 cilium_host 192.168.0.1:123 Feb 13 15:40:30.738166 ntpd[1480]: 13 Feb 15:40:30 ntpd[1480]: Listen normally on 9 cilium_net [fe80::4443:ecff:fe44:8d80%4]:123 Feb 13 15:40:30.738166 ntpd[1480]: 13 Feb 15:40:30 ntpd[1480]: Listen normally on 10 cilium_host [fe80::3c3d:b2ff:fea0:2a7d%5]:123 Feb 13 15:40:30.738166 ntpd[1480]: 13 Feb 15:40:30 ntpd[1480]: Listen normally on 11 cilium_vxlan [fe80::a4dc:e3ff:fe2d:4b0d%6]:123 Feb 13 15:40:30.738166 ntpd[1480]: 13 Feb 15:40:30 ntpd[1480]: Listen normally on 12 lxc_health [fe80::f885:44ff:fedf:e77d%8]:123 Feb 13 
15:40:30.738166 ntpd[1480]: 13 Feb 15:40:30 ntpd[1480]: Listen normally on 13 lxc15a3a89b6930 [fe80::b876:88ff:fefa:cb13%10]:123 Feb 13 15:40:30.738166 ntpd[1480]: 13 Feb 15:40:30 ntpd[1480]: Listen normally on 14 lxcc44cacb5bb56 [fe80::6c7e:aaff:feff:7e27%12]:123 Feb 13 15:40:30.737699 ntpd[1480]: Listen normally on 10 cilium_host [fe80::3c3d:b2ff:fea0:2a7d%5]:123 Feb 13 15:40:30.737764 ntpd[1480]: Listen normally on 11 cilium_vxlan [fe80::a4dc:e3ff:fe2d:4b0d%6]:123 Feb 13 15:40:30.737820 ntpd[1480]: Listen normally on 12 lxc_health [fe80::f885:44ff:fedf:e77d%8]:123 Feb 13 15:40:30.737878 ntpd[1480]: Listen normally on 13 lxc15a3a89b6930 [fe80::b876:88ff:fefa:cb13%10]:123 Feb 13 15:40:30.737937 ntpd[1480]: Listen normally on 14 lxcc44cacb5bb56 [fe80::6c7e:aaff:feff:7e27%12]:123 Feb 13 15:40:31.794157 containerd[1510]: time="2025-02-13T15:40:31.793043702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:31.794157 containerd[1510]: time="2025-02-13T15:40:31.793123074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:31.794157 containerd[1510]: time="2025-02-13T15:40:31.793143648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:31.794157 containerd[1510]: time="2025-02-13T15:40:31.793303643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:31.855379 systemd[1]: run-containerd-runc-k8s.io-d2a83fd0b6d1bcf34379c463956288ebe78c8723ec2404c79e2c275ddb7ddc58-runc.ElpfTb.mount: Deactivated successfully. Feb 13 15:40:31.859885 containerd[1510]: time="2025-02-13T15:40:31.859694017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:31.859885 containerd[1510]: time="2025-02-13T15:40:31.859858598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:31.860065 containerd[1510]: time="2025-02-13T15:40:31.859916274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:31.860172 containerd[1510]: time="2025-02-13T15:40:31.860105804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:31.887768 systemd[1]: Started cri-containerd-d2a83fd0b6d1bcf34379c463956288ebe78c8723ec2404c79e2c275ddb7ddc58.scope - libcontainer container d2a83fd0b6d1bcf34379c463956288ebe78c8723ec2404c79e2c275ddb7ddc58. Feb 13 15:40:31.916139 systemd[1]: Started cri-containerd-6d08c45da4bba15b0897b487eeb9c775b22055a0a1808e5214ddc891a4429331.scope - libcontainer container 6d08c45da4bba15b0897b487eeb9c775b22055a0a1808e5214ddc891a4429331. 
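Editor's note: the systemd-networkd and ntpd records above trace Cilium's datapath coming up: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, and one lxc* veth per pod each gain carrier and then an fe80::/10 link-local address ("Gained IPv6LL"), after which ntpd binds port 123 on every new interface. A minimal sketch of inspecting that same state from Go, using only the standard library net package; the interface-name prefixes come from the log, everything else is illustrative:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            // Only the Cilium datapath devices and per-pod veths named in the log.
            if !strings.HasPrefix(ifc.Name, "cilium_") && !strings.HasPrefix(ifc.Name, "lxc") {
                continue
            }
            fmt.Printf("%-16s up=%v", ifc.Name, ifc.Flags&net.FlagUp != 0)
            addrs, _ := ifc.Addrs()
            for _, a := range addrs {
                // The fe80:: addresses are what "Gained IPv6LL" refers to.
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.IsLinkLocalUnicast() {
                    fmt.Printf(" ipv6ll=%s", ipnet.IP)
                }
            }
            fmt.Println()
        }
    }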
Feb 13 15:40:32.017575 containerd[1510]: time="2025-02-13T15:40:32.017482855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-57xfs,Uid:4345f5a9-6e61-4760-a6b2-b28b96565df6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2a83fd0b6d1bcf34379c463956288ebe78c8723ec2404c79e2c275ddb7ddc58\"" Feb 13 15:40:32.032367 containerd[1510]: time="2025-02-13T15:40:32.031886275Z" level=info msg="CreateContainer within sandbox \"d2a83fd0b6d1bcf34379c463956288ebe78c8723ec2404c79e2c275ddb7ddc58\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:40:32.064693 containerd[1510]: time="2025-02-13T15:40:32.063184970Z" level=info msg="CreateContainer within sandbox \"d2a83fd0b6d1bcf34379c463956288ebe78c8723ec2404c79e2c275ddb7ddc58\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"930d0a4ef062885fb6a2cf8d4aaa02fa3995ecd6b08795002266f8333e08a993\"" Feb 13 15:40:32.068394 containerd[1510]: time="2025-02-13T15:40:32.066825825Z" level=info msg="StartContainer for \"930d0a4ef062885fb6a2cf8d4aaa02fa3995ecd6b08795002266f8333e08a993\"" Feb 13 15:40:32.069525 containerd[1510]: time="2025-02-13T15:40:32.069019854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f25ws,Uid:522865ed-2ac9-4ad9-9ec7-1fdd12db8180,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d08c45da4bba15b0897b487eeb9c775b22055a0a1808e5214ddc891a4429331\"" Feb 13 15:40:32.077828 containerd[1510]: time="2025-02-13T15:40:32.077701080Z" level=info msg="CreateContainer within sandbox \"6d08c45da4bba15b0897b487eeb9c775b22055a0a1808e5214ddc891a4429331\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:40:32.104872 containerd[1510]: time="2025-02-13T15:40:32.104817428Z" level=info msg="CreateContainer within sandbox \"6d08c45da4bba15b0897b487eeb9c775b22055a0a1808e5214ddc891a4429331\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b7908f0daf2ff9615a797fcf9b77a5bdd63d908d187d6e70326eec28cbebf49\"" Feb 13 15:40:32.109548 containerd[1510]: time="2025-02-13T15:40:32.107891512Z" level=info msg="StartContainer for \"5b7908f0daf2ff9615a797fcf9b77a5bdd63d908d187d6e70326eec28cbebf49\"" Feb 13 15:40:32.145761 systemd[1]: Started cri-containerd-930d0a4ef062885fb6a2cf8d4aaa02fa3995ecd6b08795002266f8333e08a993.scope - libcontainer container 930d0a4ef062885fb6a2cf8d4aaa02fa3995ecd6b08795002266f8333e08a993. Feb 13 15:40:32.186777 systemd[1]: Started cri-containerd-5b7908f0daf2ff9615a797fcf9b77a5bdd63d908d187d6e70326eec28cbebf49.scope - libcontainer container 5b7908f0daf2ff9615a797fcf9b77a5bdd63d908d187d6e70326eec28cbebf49. 
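Editor's note: the RunPodSandbox → CreateContainer → StartContainer progression for the two CoreDNS pods is the standard CRI call sequence between kubelet and containerd. Below is a sketch of the same three calls made directly against containerd's CRI socket with the k8s.io/cri-api client; the pod metadata is copied from the log, while the socket path and image reference are assumptions, and a real sandbox config would carry more fields (log directory, DNS config, linux options):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's default CRI endpoint; adjust if the socket lives elsewhere.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "coredns-7db6d8ff4d-57xfs", // from the log
                Uid:       "4345f5a9-6e61-4760-a6b2-b28b96565df6",
                Namespace: "kube-system",
                Attempt:   0,
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            panic(err)
        }
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            SandboxConfig: sandboxCfg,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "coredns", Attempt: 0},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.1"}, // illustrative tag
            },
        })
        if err != nil {
            panic(err)
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            panic(err)
        }
        fmt.Println("sandbox:", sb.PodSandboxId, "container:", ctr.ContainerId)
    }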
Feb 13 15:40:32.236837 containerd[1510]: time="2025-02-13T15:40:32.236762325Z" level=info msg="StartContainer for \"930d0a4ef062885fb6a2cf8d4aaa02fa3995ecd6b08795002266f8333e08a993\" returns successfully" Feb 13 15:40:32.259984 containerd[1510]: time="2025-02-13T15:40:32.258243865Z" level=info msg="StartContainer for \"5b7908f0daf2ff9615a797fcf9b77a5bdd63d908d187d6e70326eec28cbebf49\" returns successfully" Feb 13 15:40:33.032432 kubelet[2759]: I0213 15:40:33.031434 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-f25ws" podStartSLOduration=33.031413104 podStartE2EDuration="33.031413104s" podCreationTimestamp="2025-02-13 15:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:40:33.031346716 +0000 UTC m=+46.705866733" watchObservedRunningTime="2025-02-13 15:40:33.031413104 +0000 UTC m=+46.705933120" Feb 13 15:40:44.382070 systemd[1]: Started sshd@9-10.128.0.113:22-139.178.68.195:55552.service - OpenSSH per-connection server daemon (139.178.68.195:55552). Feb 13 15:40:44.692344 sshd[4135]: Accepted publickey for core from 139.178.68.195 port 55552 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:44.694736 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:44.702887 systemd-logind[1498]: New session 10 of user core. Feb 13 15:40:44.710934 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:40:45.038236 sshd[4137]: Connection closed by 139.178.68.195 port 55552 Feb 13 15:40:45.039587 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:45.046642 systemd[1]: sshd@9-10.128.0.113:22-139.178.68.195:55552.service: Deactivated successfully. Feb 13 15:40:45.052117 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:40:45.053843 systemd-logind[1498]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:40:45.056558 systemd-logind[1498]: Removed session 10. Feb 13 15:40:50.100486 systemd[1]: Started sshd@10-10.128.0.113:22-139.178.68.195:45422.service - OpenSSH per-connection server daemon (139.178.68.195:45422). Feb 13 15:40:50.392930 sshd[4152]: Accepted publickey for core from 139.178.68.195 port 45422 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:50.394890 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:50.401175 systemd-logind[1498]: New session 11 of user core. Feb 13 15:40:50.408748 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:40:50.686414 sshd[4154]: Connection closed by 139.178.68.195 port 45422 Feb 13 15:40:50.686862 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:50.691810 systemd[1]: sshd@10-10.128.0.113:22-139.178.68.195:45422.service: Deactivated successfully. Feb 13 15:40:50.694917 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:40:50.697427 systemd-logind[1498]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:40:50.700106 systemd-logind[1498]: Removed session 11. Feb 13 15:40:55.748343 systemd[1]: Started sshd@11-10.128.0.113:22-139.178.68.195:45426.service - OpenSSH per-connection server daemon (139.178.68.195:45426). 
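Editor's note: the "Observed pod startup duration" records are worth decoding. podStartE2EDuration is roughly observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). That is why cilium-lncnj earlier reported 8.803s against an 18.013s E2E, and why the coredns-7db6d8ff4d-f25ws record above, with zero-valued pull timestamps (image already on disk), reports the two as equal. The arithmetic, checked with the standard library against the cilium-lncnj numbers:

    package main

    import (
        "fmt"
        "time"
    )

    // mustParse reads the default Go time format used in the kubelet records.
    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-02-13 15:40:00 +0000 UTC")
        firstPull := mustParse("2025-02-13 15:40:02.603955359 +0000 UTC")
        lastPull := mustParse("2025-02-13 15:40:11.814527693 +0000 UTC")
        running := mustParse("2025-02-13 15:40:18.013627967 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window
        fmt.Println(e2e, slo)                // 18.013627967s 8.803055633s
    }

The log's 8.803055629 differs in the last few digits because kubelet subtracts monotonic clock readings (the m=+... values) rather than the wall-clock timestamps.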
Feb 13 15:40:56.046532 sshd[4167]: Accepted publickey for core from 139.178.68.195 port 45426 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:56.047295 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:56.055422 systemd-logind[1498]: New session 12 of user core. Feb 13 15:40:56.061888 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:40:56.337483 sshd[4169]: Connection closed by 139.178.68.195 port 45426 Feb 13 15:40:56.338387 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:56.344338 systemd[1]: sshd@11-10.128.0.113:22-139.178.68.195:45426.service: Deactivated successfully. Feb 13 15:40:56.347556 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:40:56.348702 systemd-logind[1498]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:40:56.350193 systemd-logind[1498]: Removed session 12. Feb 13 15:41:01.395474 systemd[1]: Started sshd@12-10.128.0.113:22-139.178.68.195:55634.service - OpenSSH per-connection server daemon (139.178.68.195:55634). Feb 13 15:41:01.699807 sshd[4182]: Accepted publickey for core from 139.178.68.195 port 55634 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:01.701918 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:01.708661 systemd-logind[1498]: New session 13 of user core. Feb 13 15:41:01.715763 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:41:02.010465 sshd[4184]: Connection closed by 139.178.68.195 port 55634 Feb 13 15:41:02.011410 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:02.016181 systemd[1]: sshd@12-10.128.0.113:22-139.178.68.195:55634.service: Deactivated successfully. Feb 13 15:41:02.019791 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:41:02.022287 systemd-logind[1498]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:41:02.024102 systemd-logind[1498]: Removed session 13. Feb 13 15:41:02.069089 systemd[1]: Started sshd@13-10.128.0.113:22-139.178.68.195:55642.service - OpenSSH per-connection server daemon (139.178.68.195:55642). Feb 13 15:41:02.371857 sshd[4196]: Accepted publickey for core from 139.178.68.195 port 55642 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:02.373900 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:02.381640 systemd-logind[1498]: New session 14 of user core. Feb 13 15:41:02.387757 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:41:02.713822 sshd[4198]: Connection closed by 139.178.68.195 port 55642 Feb 13 15:41:02.714812 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:02.720630 systemd[1]: sshd@13-10.128.0.113:22-139.178.68.195:55642.service: Deactivated successfully. Feb 13 15:41:02.724152 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:41:02.725588 systemd-logind[1498]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:41:02.727458 systemd-logind[1498]: Removed session 14. Feb 13 15:41:02.773022 systemd[1]: Started sshd@14-10.128.0.113:22-139.178.68.195:55648.service - OpenSSH per-connection server daemon (139.178.68.195:55648). 
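Editor's note: from here the log settles into a steady rhythm of short SSH sessions: publickey accepted for core, PAM opens the session, logind starts session-N.scope, the connection closes, the scope is deactivated. The client half of that handshake, sketched with golang.org/x/crypto/ssh; the address and user appear in the log, but the key path, host-key policy, and the no-op command are assumptions:

    package main

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/core/.ssh/id_rsa") // assumed key location
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User: "core",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Production code should pin the server key (the log shows its
            // SHA256 fingerprint) instead of ignoring it.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "10.128.0.113:22", cfg) // address from the log
        if err != nil {
            panic(err)
        }
        defer client.Close() // produces the "Connection closed" / scope teardown seen above
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        if err := sess.Run("true"); err != nil {
            panic(err)
        }
    }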
Feb 13 15:41:03.064751 sshd[4208]: Accepted publickey for core from 139.178.68.195 port 55648 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:03.066891 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:03.075982 systemd-logind[1498]: New session 15 of user core. Feb 13 15:41:03.080769 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:41:03.360704 sshd[4212]: Connection closed by 139.178.68.195 port 55648 Feb 13 15:41:03.361597 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:03.368181 systemd[1]: sshd@14-10.128.0.113:22-139.178.68.195:55648.service: Deactivated successfully. Feb 13 15:41:03.371323 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:41:03.372758 systemd-logind[1498]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:41:03.374368 systemd-logind[1498]: Removed session 15. Feb 13 15:41:08.421158 systemd[1]: Started sshd@15-10.128.0.113:22-139.178.68.195:55360.service - OpenSSH per-connection server daemon (139.178.68.195:55360). Feb 13 15:41:08.720041 sshd[4224]: Accepted publickey for core from 139.178.68.195 port 55360 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:08.722397 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:08.729464 systemd-logind[1498]: New session 16 of user core. Feb 13 15:41:08.736876 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:41:09.032484 sshd[4226]: Connection closed by 139.178.68.195 port 55360 Feb 13 15:41:09.033801 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:09.040771 systemd[1]: sshd@15-10.128.0.113:22-139.178.68.195:55360.service: Deactivated successfully. Feb 13 15:41:09.044205 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:41:09.045617 systemd-logind[1498]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:41:09.047554 systemd-logind[1498]: Removed session 16. Feb 13 15:41:14.090467 systemd[1]: Started sshd@16-10.128.0.113:22-139.178.68.195:55372.service - OpenSSH per-connection server daemon (139.178.68.195:55372). Feb 13 15:41:14.403065 sshd[4238]: Accepted publickey for core from 139.178.68.195 port 55372 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:14.405802 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:14.412888 systemd-logind[1498]: New session 17 of user core. Feb 13 15:41:14.419766 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:41:14.698696 sshd[4240]: Connection closed by 139.178.68.195 port 55372 Feb 13 15:41:14.699618 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:14.705615 systemd[1]: sshd@16-10.128.0.113:22-139.178.68.195:55372.service: Deactivated successfully. Feb 13 15:41:14.709071 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:41:14.710264 systemd-logind[1498]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:41:14.711970 systemd-logind[1498]: Removed session 17. Feb 13 15:41:14.758929 systemd[1]: Started sshd@17-10.128.0.113:22-139.178.68.195:55388.service - OpenSSH per-connection server daemon (139.178.68.195:55388). 
Feb 13 15:41:15.050640 sshd[4251]: Accepted publickey for core from 139.178.68.195 port 55388 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:15.052562 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:15.060097 systemd-logind[1498]: New session 18 of user core. Feb 13 15:41:15.063756 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:41:15.432912 sshd[4253]: Connection closed by 139.178.68.195 port 55388 Feb 13 15:41:15.434279 sshd-session[4251]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:15.439526 systemd[1]: sshd@17-10.128.0.113:22-139.178.68.195:55388.service: Deactivated successfully. Feb 13 15:41:15.442771 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:41:15.446053 systemd-logind[1498]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:41:15.448178 systemd-logind[1498]: Removed session 18. Feb 13 15:41:15.492946 systemd[1]: Started sshd@18-10.128.0.113:22-139.178.68.195:55400.service - OpenSSH per-connection server daemon (139.178.68.195:55400). Feb 13 15:41:15.801196 sshd[4263]: Accepted publickey for core from 139.178.68.195 port 55400 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:15.803102 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:15.809211 systemd-logind[1498]: New session 19 of user core. Feb 13 15:41:15.814782 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:41:17.756581 sshd[4265]: Connection closed by 139.178.68.195 port 55400 Feb 13 15:41:17.756006 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:17.765006 systemd[1]: sshd@18-10.128.0.113:22-139.178.68.195:55400.service: Deactivated successfully. Feb 13 15:41:17.768683 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:41:17.772008 systemd-logind[1498]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:41:17.774767 systemd-logind[1498]: Removed session 19. Feb 13 15:41:17.812977 systemd[1]: Started sshd@19-10.128.0.113:22-139.178.68.195:41440.service - OpenSSH per-connection server daemon (139.178.68.195:41440). Feb 13 15:41:18.109849 sshd[4282]: Accepted publickey for core from 139.178.68.195 port 41440 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:18.112967 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:18.121151 systemd-logind[1498]: New session 20 of user core. Feb 13 15:41:18.126786 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:41:18.547868 sshd[4284]: Connection closed by 139.178.68.195 port 41440 Feb 13 15:41:18.548846 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:18.554653 systemd[1]: sshd@19-10.128.0.113:22-139.178.68.195:41440.service: Deactivated successfully. Feb 13 15:41:18.557790 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:41:18.559520 systemd-logind[1498]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:41:18.561323 systemd-logind[1498]: Removed session 20. Feb 13 15:41:18.603962 systemd[1]: Started sshd@20-10.128.0.113:22-139.178.68.195:41452.service - OpenSSH per-connection server daemon (139.178.68.195:41452). 
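Editor's note: every "New session N of user core" / "Removed session N" pair is systemd-logind bookkeeping around one of these SSH connections. While a connection is open, the live sessions can be enumerated over logind's D-Bus API; a sketch with github.com/godbus/dbus/v5, where the struct layout mirrors logind's documented a(susso) return type and the printed values are only what one would expect from the log:

    package main

    import (
        "fmt"

        "github.com/godbus/dbus/v5"
    )

    func main() {
        conn, err := dbus.ConnectSystemBus()
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        logind := conn.Object("org.freedesktop.login1", "/org/freedesktop/login1")
        // ListSessions returns (session-id, uid, user, seat, object-path) tuples.
        var sessions []struct {
            ID   string
            UID  uint32
            User string
            Seat string
            Path dbus.ObjectPath
        }
        if err := logind.Call("org.freedesktop.login1.Manager.ListSessions", 0).Store(&sessions); err != nil {
            panic(err)
        }
        for _, s := range sessions {
            fmt.Printf("session %s user=%s uid=%d\n", s.ID, s.User, s.UID) // e.g. session 18 user=core uid=500
        }
    }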
Feb 13 15:41:18.894557 sshd[4294]: Accepted publickey for core from 139.178.68.195 port 41452 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:18.896235 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:18.903576 systemd-logind[1498]: New session 21 of user core. Feb 13 15:41:18.910946 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:41:19.181453 sshd[4296]: Connection closed by 139.178.68.195 port 41452 Feb 13 15:41:19.183838 sshd-session[4294]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:19.188436 systemd[1]: sshd@20-10.128.0.113:22-139.178.68.195:41452.service: Deactivated successfully. Feb 13 15:41:19.193637 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:41:19.196318 systemd-logind[1498]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:41:19.198588 systemd-logind[1498]: Removed session 21. Feb 13 15:41:24.238014 systemd[1]: Started sshd@21-10.128.0.113:22-139.178.68.195:41466.service - OpenSSH per-connection server daemon (139.178.68.195:41466). Feb 13 15:41:24.538394 sshd[4311]: Accepted publickey for core from 139.178.68.195 port 41466 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:24.540533 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:24.547034 systemd-logind[1498]: New session 22 of user core. Feb 13 15:41:24.556862 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:41:24.830002 sshd[4313]: Connection closed by 139.178.68.195 port 41466 Feb 13 15:41:24.830832 sshd-session[4311]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:24.835674 systemd[1]: sshd@21-10.128.0.113:22-139.178.68.195:41466.service: Deactivated successfully. Feb 13 15:41:24.838871 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:41:24.841594 systemd-logind[1498]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:41:24.843239 systemd-logind[1498]: Removed session 22. Feb 13 15:41:29.892159 systemd[1]: Started sshd@22-10.128.0.113:22-139.178.68.195:41854.service - OpenSSH per-connection server daemon (139.178.68.195:41854). Feb 13 15:41:30.207332 sshd[4326]: Accepted publickey for core from 139.178.68.195 port 41854 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:30.209717 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:30.216823 systemd-logind[1498]: New session 23 of user core. Feb 13 15:41:30.221802 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:41:30.495573 sshd[4328]: Connection closed by 139.178.68.195 port 41854 Feb 13 15:41:30.496519 sshd-session[4326]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:30.501425 systemd[1]: sshd@22-10.128.0.113:22-139.178.68.195:41854.service: Deactivated successfully. Feb 13 15:41:30.505006 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:41:30.507231 systemd-logind[1498]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:41:30.509343 systemd-logind[1498]: Removed session 23. Feb 13 15:41:35.561145 systemd[1]: Started sshd@23-10.128.0.113:22-139.178.68.195:41870.service - OpenSSH per-connection server daemon (139.178.68.195:41870). 
Feb 13 15:41:35.858605 sshd[4341]: Accepted publickey for core from 139.178.68.195 port 41870 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:35.861344 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:35.869384 systemd-logind[1498]: New session 24 of user core. Feb 13 15:41:35.874829 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:41:36.162248 sshd[4343]: Connection closed by 139.178.68.195 port 41870 Feb 13 15:41:36.163075 sshd-session[4341]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:36.168953 systemd[1]: sshd@23-10.128.0.113:22-139.178.68.195:41870.service: Deactivated successfully. Feb 13 15:41:36.174090 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:41:36.177166 systemd-logind[1498]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:41:36.179486 systemd-logind[1498]: Removed session 24. Feb 13 15:41:36.221987 systemd[1]: Started sshd@24-10.128.0.113:22-139.178.68.195:41882.service - OpenSSH per-connection server daemon (139.178.68.195:41882). Feb 13 15:41:36.522133 sshd[4354]: Accepted publickey for core from 139.178.68.195 port 41882 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:36.524097 sshd-session[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:36.531924 systemd-logind[1498]: New session 25 of user core. Feb 13 15:41:36.538782 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:41:38.211349 kubelet[2759]: I0213 15:41:38.211244 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-57xfs" podStartSLOduration=98.211216057 podStartE2EDuration="1m38.211216057s" podCreationTimestamp="2025-02-13 15:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:40:33.056182677 +0000 UTC m=+46.730702693" watchObservedRunningTime="2025-02-13 15:41:38.211216057 +0000 UTC m=+111.885736070" Feb 13 15:41:38.231555 containerd[1510]: time="2025-02-13T15:41:38.231474272Z" level=info msg="StopContainer for \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\" with timeout 30 (s)" Feb 13 15:41:38.236280 containerd[1510]: time="2025-02-13T15:41:38.235768571Z" level=info msg="Stop container \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\" with signal terminated" Feb 13 15:41:38.263182 systemd[1]: run-containerd-runc-k8s.io-34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483-runc.ohZrtR.mount: Deactivated successfully. Feb 13 15:41:38.267346 systemd[1]: cri-containerd-dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807.scope: Deactivated successfully. 
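Editor's note: the teardown that begins here uses CRI StopContainer with per-container grace periods: 30 s for cilium-operator and 2 s for the cilium-agent container. "with signal terminated" means the runtime first delivers SIGTERM; if the process is still alive when the timeout lapses it is killed with SIGKILL. The corresponding call, sketched against the same assumed containerd socket as earlier:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Timeout is the SIGTERM grace period in seconds before SIGKILL;
        // the ID is the cilium-agent container from the log.
        _, err = rt.StopContainer(context.Background(), &runtimeapi.StopContainerRequest{
            ContainerId: "34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483",
            Timeout:     2,
        })
        if err != nil {
            panic(err)
        }
    }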
Feb 13 15:41:38.286627 containerd[1510]: time="2025-02-13T15:41:38.286333285Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:41:38.298896 containerd[1510]: time="2025-02-13T15:41:38.298695691Z" level=info msg="StopContainer for \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\" with timeout 2 (s)" Feb 13 15:41:38.299885 containerd[1510]: time="2025-02-13T15:41:38.299767002Z" level=info msg="Stop container \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\" with signal terminated" Feb 13 15:41:38.315639 systemd-networkd[1402]: lxc_health: Link DOWN Feb 13 15:41:38.315660 systemd-networkd[1402]: lxc_health: Lost carrier Feb 13 15:41:38.327049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807-rootfs.mount: Deactivated successfully. Feb 13 15:41:38.339345 systemd[1]: cri-containerd-34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483.scope: Deactivated successfully. Feb 13 15:41:38.340561 systemd[1]: cri-containerd-34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483.scope: Consumed 9.918s CPU time, 126.3M memory peak, 136K read from disk, 13.3M written to disk. Feb 13 15:41:38.354229 containerd[1510]: time="2025-02-13T15:41:38.353915106Z" level=info msg="shim disconnected" id=dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807 namespace=k8s.io Feb 13 15:41:38.354869 containerd[1510]: time="2025-02-13T15:41:38.354194012Z" level=warning msg="cleaning up after shim disconnected" id=dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807 namespace=k8s.io Feb 13 15:41:38.354869 containerd[1510]: time="2025-02-13T15:41:38.354630460Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:38.387061 containerd[1510]: time="2025-02-13T15:41:38.386993077Z" level=info msg="StopContainer for \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\" returns successfully" Feb 13 15:41:38.388557 containerd[1510]: time="2025-02-13T15:41:38.388302229Z" level=info msg="StopPodSandbox for \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\"" Feb 13 15:41:38.388557 containerd[1510]: time="2025-02-13T15:41:38.388360261Z" level=info msg="Container to stop \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:41:38.393036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd-shm.mount: Deactivated successfully. Feb 13 15:41:38.399701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483-rootfs.mount: Deactivated successfully. 
Feb 13 15:41:38.406851 containerd[1510]: time="2025-02-13T15:41:38.406558117Z" level=info msg="shim disconnected" id=34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483 namespace=k8s.io Feb 13 15:41:38.406851 containerd[1510]: time="2025-02-13T15:41:38.406632413Z" level=warning msg="cleaning up after shim disconnected" id=34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483 namespace=k8s.io Feb 13 15:41:38.406851 containerd[1510]: time="2025-02-13T15:41:38.406646193Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:38.417963 systemd[1]: cri-containerd-307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd.scope: Deactivated successfully. Feb 13 15:41:38.433233 containerd[1510]: time="2025-02-13T15:41:38.432940637Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:41:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:41:38.439583 containerd[1510]: time="2025-02-13T15:41:38.439131496Z" level=info msg="StopContainer for \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\" returns successfully" Feb 13 15:41:38.442560 containerd[1510]: time="2025-02-13T15:41:38.440696023Z" level=info msg="StopPodSandbox for \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\"" Feb 13 15:41:38.442560 containerd[1510]: time="2025-02-13T15:41:38.440804254Z" level=info msg="Container to stop \"17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:41:38.442560 containerd[1510]: time="2025-02-13T15:41:38.440885098Z" level=info msg="Container to stop \"4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:41:38.442560 containerd[1510]: time="2025-02-13T15:41:38.440920189Z" level=info msg="Container to stop \"3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:41:38.442560 containerd[1510]: time="2025-02-13T15:41:38.440935603Z" level=info msg="Container to stop \"ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:41:38.442560 containerd[1510]: time="2025-02-13T15:41:38.440952051Z" level=info msg="Container to stop \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:41:38.452616 systemd[1]: cri-containerd-c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a.scope: Deactivated successfully. 
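Editor's note: the block of "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" lines is the CRI plugin walking every container in the cilium-agent sandbox during StopPodSandbox and noting the ones it can skip because they have already exited (presumably the init containers plus the agent itself). The sandbox-level calls, sketched with the same assumed client setup as above; RemovePodSandbox is what would follow once the sandbox is stopped:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        const sandbox = "c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a" // from the log

        // Stops any still-running containers, tears down the sandbox network
        // ("TearDown network for sandbox ..."), and unmounts rootfs/shm.
        if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandbox}); err != nil {
            panic(err)
        }
        // Deletes the sandbox record itself.
        if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandbox}); err != nil {
            panic(err)
        }
    }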
Feb 13 15:41:38.465994 containerd[1510]: time="2025-02-13T15:41:38.465446110Z" level=info msg="shim disconnected" id=307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd namespace=k8s.io Feb 13 15:41:38.466288 containerd[1510]: time="2025-02-13T15:41:38.466258490Z" level=warning msg="cleaning up after shim disconnected" id=307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd namespace=k8s.io Feb 13 15:41:38.466411 containerd[1510]: time="2025-02-13T15:41:38.466392330Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:38.497098 containerd[1510]: time="2025-02-13T15:41:38.496999614Z" level=info msg="shim disconnected" id=c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a namespace=k8s.io Feb 13 15:41:38.497432 containerd[1510]: time="2025-02-13T15:41:38.497404812Z" level=warning msg="cleaning up after shim disconnected" id=c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a namespace=k8s.io Feb 13 15:41:38.497708 containerd[1510]: time="2025-02-13T15:41:38.497634231Z" level=info msg="TearDown network for sandbox \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\" successfully" Feb 13 15:41:38.497811 containerd[1510]: time="2025-02-13T15:41:38.497703040Z" level=info msg="StopPodSandbox for \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\" returns successfully" Feb 13 15:41:38.497872 containerd[1510]: time="2025-02-13T15:41:38.497639157Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:38.531730 containerd[1510]: time="2025-02-13T15:41:38.531667652Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:41:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:41:38.535816 containerd[1510]: time="2025-02-13T15:41:38.535762369Z" level=info msg="TearDown network for sandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" successfully" Feb 13 15:41:38.535982 containerd[1510]: time="2025-02-13T15:41:38.535835741Z" level=info msg="StopPodSandbox for \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" returns successfully" Feb 13 15:41:38.616468 kubelet[2759]: I0213 15:41:38.616399 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spjx6\" (UniqueName: \"kubernetes.io/projected/5f019a3c-78d7-446e-b90e-374c66c00c1b-kube-api-access-spjx6\") pod \"5f019a3c-78d7-446e-b90e-374c66c00c1b\" (UID: \"5f019a3c-78d7-446e-b90e-374c66c00c1b\") " Feb 13 15:41:38.616468 kubelet[2759]: I0213 15:41:38.616466 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f019a3c-78d7-446e-b90e-374c66c00c1b-cilium-config-path\") pod \"5f019a3c-78d7-446e-b90e-374c66c00c1b\" (UID: \"5f019a3c-78d7-446e-b90e-374c66c00c1b\") " Feb 13 15:41:38.620252 kubelet[2759]: I0213 15:41:38.620081 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f019a3c-78d7-446e-b90e-374c66c00c1b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5f019a3c-78d7-446e-b90e-374c66c00c1b" (UID: "5f019a3c-78d7-446e-b90e-374c66c00c1b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:41:38.621869 kubelet[2759]: I0213 15:41:38.621808 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f019a3c-78d7-446e-b90e-374c66c00c1b-kube-api-access-spjx6" (OuterVolumeSpecName: "kube-api-access-spjx6") pod "5f019a3c-78d7-446e-b90e-374c66c00c1b" (UID: "5f019a3c-78d7-446e-b90e-374c66c00c1b"). InnerVolumeSpecName "kube-api-access-spjx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:41:38.718676 kubelet[2759]: I0213 15:41:38.716944 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-host-proc-sys-kernel\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.718676 kubelet[2759]: I0213 15:41:38.717006 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-bpf-maps\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.718676 kubelet[2759]: I0213 15:41:38.717038 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-lib-modules\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.718676 kubelet[2759]: I0213 15:41:38.717038 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:41:38.718676 kubelet[2759]: I0213 15:41:38.717070 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/49494233-ebba-4fb4-8f94-c8a738cc8d98-clustermesh-secrets\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.718676 kubelet[2759]: I0213 15:41:38.717095 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-xtables-lock\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.719136 kubelet[2759]: I0213 15:41:38.717096 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:41:38.719136 kubelet[2759]: I0213 15:41:38.717122 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-hostproc\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.719136 kubelet[2759]: I0213 15:41:38.717152 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-config-path\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.719136 kubelet[2759]: I0213 15:41:38.717179 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-cgroup\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.719136 kubelet[2759]: I0213 15:41:38.717206 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8sp9\" (UniqueName: \"kubernetes.io/projected/49494233-ebba-4fb4-8f94-c8a738cc8d98-kube-api-access-x8sp9\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.719136 kubelet[2759]: I0213 15:41:38.717234 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-run\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.719448 kubelet[2759]: I0213 15:41:38.717259 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-etc-cni-netd\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.719448 kubelet[2759]: I0213 15:41:38.717285 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-cni-path\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.719448 kubelet[2759]: I0213 15:41:38.717309 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-host-proc-sys-net\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.719448 kubelet[2759]: I0213 15:41:38.717343 2759 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/49494233-ebba-4fb4-8f94-c8a738cc8d98-hubble-tls\") pod \"49494233-ebba-4fb4-8f94-c8a738cc8d98\" (UID: \"49494233-ebba-4fb4-8f94-c8a738cc8d98\") " Feb 13 15:41:38.719448 kubelet[2759]: I0213 15:41:38.717417 2759 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-host-proc-sys-kernel\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.719448 
kubelet[2759]: I0213 15:41:38.717437 2759 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-bpf-maps\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.719808 kubelet[2759]: I0213 15:41:38.717476 2759 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-spjx6\" (UniqueName: \"kubernetes.io/projected/5f019a3c-78d7-446e-b90e-374c66c00c1b-kube-api-access-spjx6\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.719808 kubelet[2759]: I0213 15:41:38.717495 2759 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f019a3c-78d7-446e-b90e-374c66c00c1b-cilium-config-path\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.719808 kubelet[2759]: I0213 15:41:38.717124 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:41:38.719808 kubelet[2759]: I0213 15:41:38.719490 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:41:38.719808 kubelet[2759]: I0213 15:41:38.719566 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:41:38.720077 kubelet[2759]: I0213 15:41:38.719584 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-hostproc" (OuterVolumeSpecName: "hostproc") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:41:38.721533 kubelet[2759]: I0213 15:41:38.720345 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:41:38.721978 kubelet[2759]: I0213 15:41:38.721930 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-cni-path" (OuterVolumeSpecName: "cni-path") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:41:38.722087 kubelet[2759]: I0213 15:41:38.721994 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:41:38.722087 kubelet[2759]: I0213 15:41:38.722074 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:41:38.724583 kubelet[2759]: I0213 15:41:38.724544 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49494233-ebba-4fb4-8f94-c8a738cc8d98-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:41:38.727292 kubelet[2759]: I0213 15:41:38.727123 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49494233-ebba-4fb4-8f94-c8a738cc8d98-kube-api-access-x8sp9" (OuterVolumeSpecName: "kube-api-access-x8sp9") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "kube-api-access-x8sp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:41:38.727852 kubelet[2759]: I0213 15:41:38.727815 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49494233-ebba-4fb4-8f94-c8a738cc8d98-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:41:38.728919 kubelet[2759]: I0213 15:41:38.728873 2759 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "49494233-ebba-4fb4-8f94-c8a738cc8d98" (UID: "49494233-ebba-4fb4-8f94-c8a738cc8d98"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:41:38.759767 systemd[1]: Removed slice kubepods-burstable-pod49494233_ebba_4fb4_8f94_c8a738cc8d98.slice - libcontainer container kubepods-burstable-pod49494233_ebba_4fb4_8f94_c8a738cc8d98.slice. Feb 13 15:41:38.760205 systemd[1]: kubepods-burstable-pod49494233_ebba_4fb4_8f94_c8a738cc8d98.slice: Consumed 10.047s CPU time, 126.7M memory peak, 136K read from disk, 13.3M written to disk. Feb 13 15:41:38.762341 systemd[1]: Removed slice kubepods-besteffort-pod5f019a3c_78d7_446e_b90e_374c66c00c1b.slice - libcontainer container kubepods-besteffort-pod5f019a3c_78d7_446e_b90e_374c66c00c1b.slice. 
Feb 13 15:41:38.818448 kubelet[2759]: I0213 15:41:38.818359 2759 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-cni-path\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.818448 kubelet[2759]: I0213 15:41:38.818423 2759 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-host-proc-sys-net\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.818448 kubelet[2759]: I0213 15:41:38.818463 2759 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/49494233-ebba-4fb4-8f94-c8a738cc8d98-hubble-tls\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.818767 kubelet[2759]: I0213 15:41:38.818479 2759 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-etc-cni-netd\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.818767 kubelet[2759]: I0213 15:41:38.818511 2759 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-lib-modules\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.818767 kubelet[2759]: I0213 15:41:38.818527 2759 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/49494233-ebba-4fb4-8f94-c8a738cc8d98-clustermesh-secrets\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.818767 kubelet[2759]: I0213 15:41:38.818541 2759 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-xtables-lock\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.818767 kubelet[2759]: I0213 15:41:38.818557 2759 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-hostproc\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.818767 kubelet[2759]: I0213 15:41:38.818572 2759 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-config-path\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.818767 kubelet[2759]: I0213 15:41:38.818586 2759 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-cgroup\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.818984 kubelet[2759]: I0213 15:41:38.818601 2759 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-x8sp9\" (UniqueName: \"kubernetes.io/projected/49494233-ebba-4fb4-8f94-c8a738cc8d98-kube-api-access-x8sp9\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:38.818984 kubelet[2759]: I0213 
15:41:38.818626 2759 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/49494233-ebba-4fb4-8f94-c8a738cc8d98-cilium-run\") on node \"ci-4230-0-1-4aea9644842ac74b5473.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 15:41:39.198121 kubelet[2759]: I0213 15:41:39.196809 2759 scope.go:117] "RemoveContainer" containerID="dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807" Feb 13 15:41:39.203046 containerd[1510]: time="2025-02-13T15:41:39.202862464Z" level=info msg="RemoveContainer for \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\"" Feb 13 15:41:39.214445 containerd[1510]: time="2025-02-13T15:41:39.214376329Z" level=info msg="RemoveContainer for \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\" returns successfully" Feb 13 15:41:39.215702 kubelet[2759]: I0213 15:41:39.215092 2759 scope.go:117] "RemoveContainer" containerID="dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807" Feb 13 15:41:39.216578 containerd[1510]: time="2025-02-13T15:41:39.216524912Z" level=error msg="ContainerStatus for \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\": not found" Feb 13 15:41:39.216764 kubelet[2759]: E0213 15:41:39.216731 2759 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\": not found" containerID="dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807" Feb 13 15:41:39.216902 kubelet[2759]: I0213 15:41:39.216781 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807"} err="failed to get container status \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\": rpc error: code = NotFound desc = an error occurred when try to find container \"dea0e747a893130d634a5c04999676a0be48ca37267e3c2ed7606126da307807\": not found" Feb 13 15:41:39.216966 kubelet[2759]: I0213 15:41:39.216910 2759 scope.go:117] "RemoveContainer" containerID="34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483" Feb 13 15:41:39.220269 containerd[1510]: time="2025-02-13T15:41:39.220230745Z" level=info msg="RemoveContainer for \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\"" Feb 13 15:41:39.225592 containerd[1510]: time="2025-02-13T15:41:39.225363064Z" level=info msg="RemoveContainer for \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\" returns successfully" Feb 13 15:41:39.226230 kubelet[2759]: I0213 15:41:39.226025 2759 scope.go:117] "RemoveContainer" containerID="ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05" Feb 13 15:41:39.227730 containerd[1510]: time="2025-02-13T15:41:39.227695852Z" level=info msg="RemoveContainer for \"ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05\"" Feb 13 15:41:39.234586 containerd[1510]: time="2025-02-13T15:41:39.234429151Z" level=info msg="RemoveContainer for \"ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05\" returns successfully" Feb 13 15:41:39.235281 kubelet[2759]: I0213 15:41:39.235248 2759 scope.go:117] "RemoveContainer" 
containerID="3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e" Feb 13 15:41:39.238310 containerd[1510]: time="2025-02-13T15:41:39.237898344Z" level=info msg="RemoveContainer for \"3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e\"" Feb 13 15:41:39.242046 containerd[1510]: time="2025-02-13T15:41:39.241994456Z" level=info msg="RemoveContainer for \"3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e\" returns successfully" Feb 13 15:41:39.242403 kubelet[2759]: I0213 15:41:39.242304 2759 scope.go:117] "RemoveContainer" containerID="4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d" Feb 13 15:41:39.243934 containerd[1510]: time="2025-02-13T15:41:39.243858757Z" level=info msg="RemoveContainer for \"4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d\"" Feb 13 15:41:39.248641 containerd[1510]: time="2025-02-13T15:41:39.248597547Z" level=info msg="RemoveContainer for \"4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d\" returns successfully" Feb 13 15:41:39.249534 kubelet[2759]: I0213 15:41:39.248948 2759 scope.go:117] "RemoveContainer" containerID="17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602" Feb 13 15:41:39.251622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd-rootfs.mount: Deactivated successfully. Feb 13 15:41:39.251985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a-rootfs.mount: Deactivated successfully. Feb 13 15:41:39.252130 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a-shm.mount: Deactivated successfully. Feb 13 15:41:39.252269 systemd[1]: var-lib-kubelet-pods-49494233\x2debba\x2d4fb4\x2d8f94\x2dc8a738cc8d98-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx8sp9.mount: Deactivated successfully. Feb 13 15:41:39.252393 systemd[1]: var-lib-kubelet-pods-5f019a3c\x2d78d7\x2d446e\x2db90e\x2d374c66c00c1b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dspjx6.mount: Deactivated successfully. Feb 13 15:41:39.253143 systemd[1]: var-lib-kubelet-pods-49494233\x2debba\x2d4fb4\x2d8f94\x2dc8a738cc8d98-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:41:39.254713 containerd[1510]: time="2025-02-13T15:41:39.254075515Z" level=info msg="RemoveContainer for \"17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602\"" Feb 13 15:41:39.253316 systemd[1]: var-lib-kubelet-pods-49494233\x2debba\x2d4fb4\x2d8f94\x2dc8a738cc8d98-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 13 15:41:39.259993 containerd[1510]: time="2025-02-13T15:41:39.259934301Z" level=info msg="RemoveContainer for \"17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602\" returns successfully" Feb 13 15:41:39.260254 kubelet[2759]: I0213 15:41:39.260148 2759 scope.go:117] "RemoveContainer" containerID="34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483" Feb 13 15:41:39.260473 containerd[1510]: time="2025-02-13T15:41:39.260404583Z" level=error msg="ContainerStatus for \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\": not found" Feb 13 15:41:39.260824 kubelet[2759]: E0213 15:41:39.260662 2759 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\": not found" containerID="34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483" Feb 13 15:41:39.260824 kubelet[2759]: I0213 15:41:39.260703 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483"} err="failed to get container status \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\": rpc error: code = NotFound desc = an error occurred when try to find container \"34c908290705e200cabe7e4c26719b0024162cbce5d7107627495a75f3047483\": not found" Feb 13 15:41:39.260824 kubelet[2759]: I0213 15:41:39.260735 2759 scope.go:117] "RemoveContainer" containerID="ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05" Feb 13 15:41:39.261206 containerd[1510]: time="2025-02-13T15:41:39.261154530Z" level=error msg="ContainerStatus for \"ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05\": not found" Feb 13 15:41:39.261484 kubelet[2759]: E0213 15:41:39.261432 2759 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05\": not found" containerID="ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05" Feb 13 15:41:39.261484 kubelet[2759]: I0213 15:41:39.261470 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05"} err="failed to get container status \"ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad93ded5cb63ce6f7e07589975d37ead1308db4f600ca2762002079b98f39a05\": not found" Feb 13 15:41:39.261698 kubelet[2759]: I0213 15:41:39.261516 2759 scope.go:117] "RemoveContainer" containerID="3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e" Feb 13 15:41:39.261869 containerd[1510]: time="2025-02-13T15:41:39.261808628Z" level=error msg="ContainerStatus for \"3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e\": not found" Feb 13 15:41:39.262012 kubelet[2759]: E0213 15:41:39.261981 2759 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e\": not found" containerID="3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e" Feb 13 15:41:39.262106 kubelet[2759]: I0213 15:41:39.262026 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e"} err="failed to get container status \"3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fa114680000057c51d94aa4683278019d6bf39393186825a2e5370dd9c7108e\": not found" Feb 13 15:41:39.262106 kubelet[2759]: I0213 15:41:39.262051 2759 scope.go:117] "RemoveContainer" containerID="4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d" Feb 13 15:41:39.262301 containerd[1510]: time="2025-02-13T15:41:39.262248395Z" level=error msg="ContainerStatus for \"4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d\": not found" Feb 13 15:41:39.262578 kubelet[2759]: E0213 15:41:39.262530 2759 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d\": not found" containerID="4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d" Feb 13 15:41:39.262671 kubelet[2759]: I0213 15:41:39.262598 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d"} err="failed to get container status \"4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d\": rpc error: code = NotFound desc = an error occurred when try to find container \"4bd494c62ad66e05e4cdc01ff1b61ced5500247f56da03f34da8775d24fe182d\": not found" Feb 13 15:41:39.262671 kubelet[2759]: I0213 15:41:39.262624 2759 scope.go:117] "RemoveContainer" containerID="17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602" Feb 13 15:41:39.263075 kubelet[2759]: E0213 15:41:39.263007 2759 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602\": not found" containerID="17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602" Feb 13 15:41:39.263075 kubelet[2759]: I0213 15:41:39.263041 2759 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602"} err="failed to get container status \"17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602\": rpc error: code = NotFound desc = an error occurred when try to find container \"17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602\": not found" Feb 13 15:41:39.263202 containerd[1510]: time="2025-02-13T15:41:39.262829153Z" level=error msg="ContainerStatus for 
\"17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17fd6bfbdbc13e9d2ce0c30a6b12136d1c9e77017f9e457af3eb408c4ba2f602\": not found" Feb 13 15:41:40.211738 sshd[4357]: Connection closed by 139.178.68.195 port 41882 Feb 13 15:41:40.212235 sshd-session[4354]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:40.218990 systemd[1]: sshd@24-10.128.0.113:22-139.178.68.195:41882.service: Deactivated successfully. Feb 13 15:41:40.223789 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:41:40.228212 systemd-logind[1498]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:41:40.230487 systemd-logind[1498]: Removed session 25. Feb 13 15:41:40.272957 systemd[1]: Started sshd@25-10.128.0.113:22-139.178.68.195:53380.service - OpenSSH per-connection server daemon (139.178.68.195:53380). Feb 13 15:41:40.573418 sshd[4521]: Accepted publickey for core from 139.178.68.195 port 53380 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:40.575356 sshd-session[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:40.584586 systemd-logind[1498]: New session 26 of user core. Feb 13 15:41:40.590763 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:41:40.737368 ntpd[1480]: Deleting interface #12 lxc_health, fe80::f885:44ff:fedf:e77d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=70 secs Feb 13 15:41:40.737986 ntpd[1480]: 13 Feb 15:41:40 ntpd[1480]: Deleting interface #12 lxc_health, fe80::f885:44ff:fedf:e77d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=70 secs Feb 13 15:41:40.754964 kubelet[2759]: I0213 15:41:40.754909 2759 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49494233-ebba-4fb4-8f94-c8a738cc8d98" path="/var/lib/kubelet/pods/49494233-ebba-4fb4-8f94-c8a738cc8d98/volumes" Feb 13 15:41:40.756226 kubelet[2759]: I0213 15:41:40.756183 2759 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f019a3c-78d7-446e-b90e-374c66c00c1b" path="/var/lib/kubelet/pods/5f019a3c-78d7-446e-b90e-374c66c00c1b/volumes" Feb 13 15:41:41.502412 kubelet[2759]: I0213 15:41:41.502336 2759 topology_manager.go:215] "Topology Admit Handler" podUID="ec26ce2b-5aa0-448e-a43a-f51f30fcd24b" podNamespace="kube-system" podName="cilium-vvnp6" Feb 13 15:41:41.502638 kubelet[2759]: E0213 15:41:41.502435 2759 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="49494233-ebba-4fb4-8f94-c8a738cc8d98" containerName="apply-sysctl-overwrites" Feb 13 15:41:41.502638 kubelet[2759]: E0213 15:41:41.502453 2759 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="49494233-ebba-4fb4-8f94-c8a738cc8d98" containerName="mount-bpf-fs" Feb 13 15:41:41.502638 kubelet[2759]: E0213 15:41:41.502464 2759 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="49494233-ebba-4fb4-8f94-c8a738cc8d98" containerName="clean-cilium-state" Feb 13 15:41:41.502638 kubelet[2759]: E0213 15:41:41.502474 2759 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="49494233-ebba-4fb4-8f94-c8a738cc8d98" containerName="cilium-agent" Feb 13 15:41:41.502638 kubelet[2759]: E0213 15:41:41.502484 2759 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f019a3c-78d7-446e-b90e-374c66c00c1b" containerName="cilium-operator" Feb 13 15:41:41.502638 kubelet[2759]: E0213 15:41:41.502495 2759 cpu_manager.go:395] 
"RemoveStaleState: removing container" podUID="49494233-ebba-4fb4-8f94-c8a738cc8d98" containerName="mount-cgroup" Feb 13 15:41:41.502638 kubelet[2759]: I0213 15:41:41.502549 2759 memory_manager.go:354] "RemoveStaleState removing state" podUID="49494233-ebba-4fb4-8f94-c8a738cc8d98" containerName="cilium-agent" Feb 13 15:41:41.502638 kubelet[2759]: I0213 15:41:41.502560 2759 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f019a3c-78d7-446e-b90e-374c66c00c1b" containerName="cilium-operator" Feb 13 15:41:41.517347 systemd[1]: Created slice kubepods-burstable-podec26ce2b_5aa0_448e_a43a_f51f30fcd24b.slice - libcontainer container kubepods-burstable-podec26ce2b_5aa0_448e_a43a_f51f30fcd24b.slice. Feb 13 15:41:41.523774 sshd[4524]: Connection closed by 139.178.68.195 port 53380 Feb 13 15:41:41.525397 sshd-session[4521]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:41.537286 systemd-logind[1498]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:41:41.538778 systemd[1]: sshd@25-10.128.0.113:22-139.178.68.195:53380.service: Deactivated successfully. Feb 13 15:41:41.549958 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:41:41.555214 systemd-logind[1498]: Removed session 26. Feb 13 15:41:41.588679 systemd[1]: Started sshd@26-10.128.0.113:22-139.178.68.195:53390.service - OpenSSH per-connection server daemon (139.178.68.195:53390). Feb 13 15:41:41.638168 kubelet[2759]: I0213 15:41:41.637754 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-host-proc-sys-kernel\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638168 kubelet[2759]: I0213 15:41:41.637820 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-bpf-maps\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638168 kubelet[2759]: I0213 15:41:41.637879 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-cilium-cgroup\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638168 kubelet[2759]: I0213 15:41:41.637909 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-xtables-lock\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638168 kubelet[2759]: I0213 15:41:41.637945 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-cilium-run\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638168 kubelet[2759]: I0213 15:41:41.637994 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52qfg\" (UniqueName: 
\"kubernetes.io/projected/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-kube-api-access-52qfg\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638663 kubelet[2759]: I0213 15:41:41.638025 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-cni-path\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638663 kubelet[2759]: I0213 15:41:41.638052 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-lib-modules\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638663 kubelet[2759]: I0213 15:41:41.638080 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-clustermesh-secrets\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638663 kubelet[2759]: I0213 15:41:41.638104 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-hubble-tls\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638663 kubelet[2759]: I0213 15:41:41.638129 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-etc-cni-netd\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638663 kubelet[2759]: I0213 15:41:41.638156 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-cilium-config-path\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638945 kubelet[2759]: I0213 15:41:41.638195 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-cilium-ipsec-secrets\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638945 kubelet[2759]: I0213 15:41:41.638220 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-host-proc-sys-net\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" Feb 13 15:41:41.638945 kubelet[2759]: I0213 15:41:41.638250 2759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec26ce2b-5aa0-448e-a43a-f51f30fcd24b-hostproc\") pod \"cilium-vvnp6\" (UID: \"ec26ce2b-5aa0-448e-a43a-f51f30fcd24b\") " pod="kube-system/cilium-vvnp6" 
Feb 13 15:41:41.825954 containerd[1510]: time="2025-02-13T15:41:41.825891913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvnp6,Uid:ec26ce2b-5aa0-448e-a43a-f51f30fcd24b,Namespace:kube-system,Attempt:0,}" Feb 13 15:41:41.871904 containerd[1510]: time="2025-02-13T15:41:41.871703410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:41:41.871904 containerd[1510]: time="2025-02-13T15:41:41.871808265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:41:41.873116 containerd[1510]: time="2025-02-13T15:41:41.871880128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:41.873116 containerd[1510]: time="2025-02-13T15:41:41.873044701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:41.902751 systemd[1]: Started cri-containerd-b3ffda427ead2ab9f8929aa0d70c9bc26a5fd421ce4271f1d96a118ebd932abe.scope - libcontainer container b3ffda427ead2ab9f8929aa0d70c9bc26a5fd421ce4271f1d96a118ebd932abe. Feb 13 15:41:41.916103 sshd[4534]: Accepted publickey for core from 139.178.68.195 port 53390 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:41.919224 sshd-session[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:41.927811 kubelet[2759]: E0213 15:41:41.927645 2759 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:41:41.931864 systemd-logind[1498]: New session 27 of user core. Feb 13 15:41:41.936769 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 15:41:41.957837 containerd[1510]: time="2025-02-13T15:41:41.957759632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvnp6,Uid:ec26ce2b-5aa0-448e-a43a-f51f30fcd24b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3ffda427ead2ab9f8929aa0d70c9bc26a5fd421ce4271f1d96a118ebd932abe\"" Feb 13 15:41:41.962823 containerd[1510]: time="2025-02-13T15:41:41.962775875Z" level=info msg="CreateContainer within sandbox \"b3ffda427ead2ab9f8929aa0d70c9bc26a5fd421ce4271f1d96a118ebd932abe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:41:41.982037 containerd[1510]: time="2025-02-13T15:41:41.981876062Z" level=info msg="CreateContainer within sandbox \"b3ffda427ead2ab9f8929aa0d70c9bc26a5fd421ce4271f1d96a118ebd932abe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb5b8c6849250fcb25aeb85838364993c230b1570221e3336a29b9263da8f8ac\"" Feb 13 15:41:41.983843 containerd[1510]: time="2025-02-13T15:41:41.982551264Z" level=info msg="StartContainer for \"eb5b8c6849250fcb25aeb85838364993c230b1570221e3336a29b9263da8f8ac\"" Feb 13 15:41:42.018843 systemd[1]: Started cri-containerd-eb5b8c6849250fcb25aeb85838364993c230b1570221e3336a29b9263da8f8ac.scope - libcontainer container eb5b8c6849250fcb25aeb85838364993c230b1570221e3336a29b9263da8f8ac. 
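The entries above show the usual CRI ordering for bringing up a pod: RunPodSandbox returns a sandbox ID, each container is then created "within sandbox" by that ID, and finally started by its own container ID. A schematic sketch of that three-step flow (the names here are illustrative, not the real CRI client API):

```python
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class Sandbox:
    pod: str
    sandbox_id: str = field(default_factory=lambda: uuid4().hex)
    containers: dict = field(default_factory=dict)

def run_pod_sandbox(pod: str) -> Sandbox:
    # "RunPodSandbox ... returns sandbox id ..."
    return Sandbox(pod=pod)

def create_container(sb: Sandbox, name: str) -> str:
    # "CreateContainer within sandbox <sid> ... returns container id ..."
    cid = uuid4().hex
    sb.containers[cid] = [name, "created"]
    return cid

def start_container(sb: Sandbox, cid: str) -> None:
    # "StartContainer for <cid> returns successfully"
    sb.containers[cid][1] = "running"

sb = run_pod_sandbox("kube-system/cilium-vvnp6")
cid = create_container(sb, "mount-cgroup")
start_container(sb, cid)
print(sb.sandbox_id, sb.containers[cid])
```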
Feb 13 15:41:42.072559 containerd[1510]: time="2025-02-13T15:41:42.072478679Z" level=info msg="StartContainer for \"eb5b8c6849250fcb25aeb85838364993c230b1570221e3336a29b9263da8f8ac\" returns successfully" Feb 13 15:41:42.085939 systemd[1]: cri-containerd-eb5b8c6849250fcb25aeb85838364993c230b1570221e3336a29b9263da8f8ac.scope: Deactivated successfully. Feb 13 15:41:42.127561 sshd[4578]: Connection closed by 139.178.68.195 port 53390 Feb 13 15:41:42.132299 sshd-session[4534]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:42.139192 containerd[1510]: time="2025-02-13T15:41:42.139101344Z" level=info msg="shim disconnected" id=eb5b8c6849250fcb25aeb85838364993c230b1570221e3336a29b9263da8f8ac namespace=k8s.io Feb 13 15:41:42.139192 containerd[1510]: time="2025-02-13T15:41:42.139180862Z" level=warning msg="cleaning up after shim disconnected" id=eb5b8c6849250fcb25aeb85838364993c230b1570221e3336a29b9263da8f8ac namespace=k8s.io Feb 13 15:41:42.139192 containerd[1510]: time="2025-02-13T15:41:42.139195592Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:42.144475 systemd[1]: sshd@26-10.128.0.113:22-139.178.68.195:53390.service: Deactivated successfully. Feb 13 15:41:42.155695 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 15:41:42.157850 systemd-logind[1498]: Session 27 logged out. Waiting for processes to exit. Feb 13 15:41:42.166269 systemd-logind[1498]: Removed session 27. Feb 13 15:41:42.173423 containerd[1510]: time="2025-02-13T15:41:42.173360258Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:41:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:41:42.188988 systemd[1]: Started sshd@27-10.128.0.113:22-139.178.68.195:53396.service - OpenSSH per-connection server daemon (139.178.68.195:53396). Feb 13 15:41:42.221368 containerd[1510]: time="2025-02-13T15:41:42.221098016Z" level=info msg="CreateContainer within sandbox \"b3ffda427ead2ab9f8929aa0d70c9bc26a5fd421ce4271f1d96a118ebd932abe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:41:42.237639 containerd[1510]: time="2025-02-13T15:41:42.237581369Z" level=info msg="CreateContainer within sandbox \"b3ffda427ead2ab9f8929aa0d70c9bc26a5fd421ce4271f1d96a118ebd932abe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a1634ea166c7ffd6f63f23a1a4dedab1660bd17ca6441bcc4c484a32fb2134e9\"" Feb 13 15:41:42.238646 containerd[1510]: time="2025-02-13T15:41:42.238609486Z" level=info msg="StartContainer for \"a1634ea166c7ffd6f63f23a1a4dedab1660bd17ca6441bcc4c484a32fb2134e9\"" Feb 13 15:41:42.282788 systemd[1]: Started cri-containerd-a1634ea166c7ffd6f63f23a1a4dedab1660bd17ca6441bcc4c484a32fb2134e9.scope - libcontainer container a1634ea166c7ffd6f63f23a1a4dedab1660bd17ca6441bcc4c484a32fb2134e9. Feb 13 15:41:42.324067 containerd[1510]: time="2025-02-13T15:41:42.323455465Z" level=info msg="StartContainer for \"a1634ea166c7ffd6f63f23a1a4dedab1660bd17ca6441bcc4c484a32fb2134e9\" returns successfully" Feb 13 15:41:42.334325 systemd[1]: cri-containerd-a1634ea166c7ffd6f63f23a1a4dedab1660bd17ca6441bcc4c484a32fb2134e9.scope: Deactivated successfully. 
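The containerd entries above (the shim-disconnected triple, the plugin-loading lines) are logfmt-style key=value pairs, with quoted values for anything containing spaces. A small parser sketch for pulling fields out of such lines; this is an approximation, not a complete logfmt implementation, and can misfire if a quoted message itself embeds key=value text:

```python
import re

# Matches bare key=value tokens and key="quoted \" value" tokens.
TOKEN = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_containerd(line: str) -> dict:
    fields = {}
    for key, raw in TOKEN.findall(line):
        if raw.startswith('"'):
            raw = raw[1:-1].replace('\\"', '"')
        fields[key] = raw
    return fields

rec = parse_containerd(
    'time="2025-02-13T15:41:42.139101344Z" level=info '
    'msg="shim disconnected" '
    'id=eb5b8c6849250fcb25aeb85838364993c230b1570221e3336a29b9263da8f8ac '
    'namespace=k8s.io')
assert rec["level"] == "info" and rec["msg"] == "shim disconnected"
print(rec)
```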
Feb 13 15:41:42.374866 containerd[1510]: time="2025-02-13T15:41:42.374692591Z" level=info msg="shim disconnected" id=a1634ea166c7ffd6f63f23a1a4dedab1660bd17ca6441bcc4c484a32fb2134e9 namespace=k8s.io Feb 13 15:41:42.374866 containerd[1510]: time="2025-02-13T15:41:42.374768328Z" level=warning msg="cleaning up after shim disconnected" id=a1634ea166c7ffd6f63f23a1a4dedab1660bd17ca6441bcc4c484a32fb2134e9 namespace=k8s.io Feb 13 15:41:42.374866 containerd[1510]: time="2025-02-13T15:41:42.374783323Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:42.482111 sshd[4650]: Accepted publickey for core from 139.178.68.195 port 53396 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:42.484125 sshd-session[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:42.491411 systemd-logind[1498]: New session 28 of user core. Feb 13 15:41:42.496766 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 15:41:43.226116 containerd[1510]: time="2025-02-13T15:41:43.226037881Z" level=info msg="CreateContainer within sandbox \"b3ffda427ead2ab9f8929aa0d70c9bc26a5fd421ce4271f1d96a118ebd932abe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:41:43.254257 containerd[1510]: time="2025-02-13T15:41:43.251623637Z" level=info msg="CreateContainer within sandbox \"b3ffda427ead2ab9f8929aa0d70c9bc26a5fd421ce4271f1d96a118ebd932abe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8574e8c6054522830fb14edae9c5d086f1df261f5d5bfacec8d6df42cb4dc836\"" Feb 13 15:41:43.254257 containerd[1510]: time="2025-02-13T15:41:43.253232505Z" level=info msg="StartContainer for \"8574e8c6054522830fb14edae9c5d086f1df261f5d5bfacec8d6df42cb4dc836\"" Feb 13 15:41:43.308785 systemd[1]: Started cri-containerd-8574e8c6054522830fb14edae9c5d086f1df261f5d5bfacec8d6df42cb4dc836.scope - libcontainer container 8574e8c6054522830fb14edae9c5d086f1df261f5d5bfacec8d6df42cb4dc836. Feb 13 15:41:43.354569 containerd[1510]: time="2025-02-13T15:41:43.354472307Z" level=info msg="StartContainer for \"8574e8c6054522830fb14edae9c5d086f1df261f5d5bfacec8d6df42cb4dc836\" returns successfully" Feb 13 15:41:43.359636 systemd[1]: cri-containerd-8574e8c6054522830fb14edae9c5d086f1df261f5d5bfacec8d6df42cb4dc836.scope: Deactivated successfully. Feb 13 15:41:43.393739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8574e8c6054522830fb14edae9c5d086f1df261f5d5bfacec8d6df42cb4dc836-rootfs.mount: Deactivated successfully. 
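The kubelet lines throughout this excerpt use klog's header format: a severity letter (I/W/E/F), month and day, time with microseconds, the PID, the source file:line, then "] " and the message. klog omits the year, so any parser has to supply one. A sketch:

```python
import re
from datetime import datetime

KLOG = re.compile(
    r'(?P<sev>[IWEF])(?P<md>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +'
    r'(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)')

def parse_klog(line: str, year: int = 2025) -> dict:
    m = KLOG.match(line)
    if m is None:
        raise ValueError("not a klog line")
    d = m.groupdict()
    # Reassemble a full timestamp from the year plus klog's mmdd field.
    d["when"] = datetime.strptime(f"{year}{d.pop('md')} {d.pop('time')}",
                                  "%Y%m%d %H:%M:%S.%f")
    return d

rec = parse_klog('E0213 15:41:39.216731    2759 remote_runtime.go:432] '
                 '"ContainerStatus from runtime service failed"')
assert rec["sev"] == "E" and rec["src"] == "remote_runtime.go:432"
print(rec["when"], rec["msg"])
```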
Feb 13 15:41:43.398388 containerd[1510]: time="2025-02-13T15:41:43.398301424Z" level=info msg="shim disconnected" id=8574e8c6054522830fb14edae9c5d086f1df261f5d5bfacec8d6df42cb4dc836 namespace=k8s.io Feb 13 15:41:43.398388 containerd[1510]: time="2025-02-13T15:41:43.398378234Z" level=warning msg="cleaning up after shim disconnected" id=8574e8c6054522830fb14edae9c5d086f1df261f5d5bfacec8d6df42cb4dc836 namespace=k8s.io Feb 13 15:41:43.398388 containerd[1510]: time="2025-02-13T15:41:43.398392566Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:44.235099 containerd[1510]: time="2025-02-13T15:41:44.234425840Z" level=info msg="CreateContainer within sandbox \"b3ffda427ead2ab9f8929aa0d70c9bc26a5fd421ce4271f1d96a118ebd932abe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:41:44.262715 containerd[1510]: time="2025-02-13T15:41:44.262641741Z" level=info msg="CreateContainer within sandbox \"b3ffda427ead2ab9f8929aa0d70c9bc26a5fd421ce4271f1d96a118ebd932abe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1fb0a3f9e06d103535bb0517901362bd44a1651158f2cdba49a1ec95825f667f\"" Feb 13 15:41:44.264042 containerd[1510]: time="2025-02-13T15:41:44.263978113Z" level=info msg="StartContainer for \"1fb0a3f9e06d103535bb0517901362bd44a1651158f2cdba49a1ec95825f667f\"" Feb 13 15:41:44.269321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1292783950.mount: Deactivated successfully. Feb 13 15:41:44.327104 systemd[1]: Started cri-containerd-1fb0a3f9e06d103535bb0517901362bd44a1651158f2cdba49a1ec95825f667f.scope - libcontainer container 1fb0a3f9e06d103535bb0517901362bd44a1651158f2cdba49a1ec95825f667f.scope. Feb 13 15:41:44.382875 systemd[1]: cri-containerd-1fb0a3f9e06d103535bb0517901362bd44a1651158f2cdba49a1ec95825f667f.scope: Deactivated successfully. Feb 13 15:41:44.387765 containerd[1510]: time="2025-02-13T15:41:44.387610624Z" level=info msg="StartContainer for \"1fb0a3f9e06d103535bb0517901362bd44a1651158f2cdba49a1ec95825f667f\" returns successfully" Feb 13 15:41:44.422328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fb0a3f9e06d103535bb0517901362bd44a1651158f2cdba49a1ec95825f667f-rootfs.mount: Deactivated successfully. Feb 13 15:41:44.425153 containerd[1510]: time="2025-02-13T15:41:44.424915154Z" level=info msg="shim disconnected" id=1fb0a3f9e06d103535bb0517901362bd44a1651158f2cdba49a1ec95825f667f namespace=k8s.io Feb 13 15:41:44.425153 containerd[1510]: time="2025-02-13T15:41:44.425150617Z" level=warning msg="cleaning up after shim disconnected" id=1fb0a3f9e06d103535bb0517901362bd44a1651158f2cdba49a1ec95825f667f namespace=k8s.io Feb 13 15:41:44.425933 containerd[1510]: time="2025-02-13T15:41:44.425172166Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:45.238464 containerd[1510]: time="2025-02-13T15:41:45.238383121Z" level=info msg="CreateContainer within sandbox \"b3ffda427ead2ab9f8929aa0d70c9bc26a5fd421ce4271f1d96a118ebd932abe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:41:45.271038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427737265.mount: Deactivated successfully. Feb 13 15:41:45.272909 containerd[1510]: time="2025-02-13T15:41:45.269577397Z" level=info msg="CreateContainer within sandbox \"b3ffda427ead2ab9f8929aa0d70c9bc26a5fd421ce4271f1d96a118ebd932abe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"27f9d7dbf0a197d7ecc9c1291bef3298611eb8b0fbb9f12a78404988576840b8\""
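With clean-cilium-state finished and cilium-agent created above, the full container sequence for this pod is now visible: four init containers run one at a time to completion, then the long-lived agent starts. A sketch of that run-to-completion ordering (the runner is illustrative; the real sequencing lives in the kubelet):

```python
def run_sequentially(containers, run):
    """Init-container semantics: each init step must exit 0 before the
    next starts; the final entry is the long-running main container."""
    *init, main = containers
    for name in init:
        if run(name) != 0:
            raise RuntimeError(f"init container {name} failed; pod cannot start")
    run(main)

run_sequentially(
    ["mount-cgroup",             # eb5b8c6849...
     "apply-sysctl-overwrites",  # a1634ea166...
     "mount-bpf-fs",             # 8574e8c605...
     "clean-cilium-state",       # 1fb0a3f9e0...
     "cilium-agent"],            # 27f9d7dbf0... (main container)
    lambda name: (print(f"start {name}"), 0)[1])
```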
Feb 13 15:41:45.275405 containerd[1510]: time="2025-02-13T15:41:45.273996646Z" level=info msg="StartContainer for \"27f9d7dbf0a197d7ecc9c1291bef3298611eb8b0fbb9f12a78404988576840b8\"" Feb 13 15:41:45.331769 systemd[1]: Started cri-containerd-27f9d7dbf0a197d7ecc9c1291bef3298611eb8b0fbb9f12a78404988576840b8.scope - libcontainer container 27f9d7dbf0a197d7ecc9c1291bef3298611eb8b0fbb9f12a78404988576840b8. Feb 13 15:41:45.392214 containerd[1510]: time="2025-02-13T15:41:45.392150042Z" level=info msg="StartContainer for \"27f9d7dbf0a197d7ecc9c1291bef3298611eb8b0fbb9f12a78404988576840b8\" returns successfully" Feb 13 15:41:45.908556 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 15:41:46.258874 kubelet[2759]: I0213 15:41:46.258213 2759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vvnp6" podStartSLOduration=5.258187649 podStartE2EDuration="5.258187649s" podCreationTimestamp="2025-02-13 15:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:41:46.257634566 +0000 UTC m=+119.932154581" watchObservedRunningTime="2025-02-13 15:41:46.258187649 +0000 UTC m=+119.932707665" Feb 13 15:41:46.733551 containerd[1510]: time="2025-02-13T15:41:46.733471759Z" level=info msg="StopPodSandbox for \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\"" Feb 13 15:41:46.734314 containerd[1510]: time="2025-02-13T15:41:46.733680684Z" level=info msg="TearDown network for sandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" successfully" Feb 13 15:41:46.734314 containerd[1510]: time="2025-02-13T15:41:46.733703850Z" level=info msg="StopPodSandbox for \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" returns successfully" Feb 13 15:41:46.734457 containerd[1510]: time="2025-02-13T15:41:46.734354574Z" level=info msg="RemovePodSandbox for \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\"" Feb 13 15:41:46.734457 containerd[1510]: time="2025-02-13T15:41:46.734395784Z" level=info msg="Forcibly stopping sandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\"" Feb 13 15:41:46.734641 containerd[1510]: time="2025-02-13T15:41:46.734481844Z" level=info msg="TearDown network for sandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" successfully" Feb 13 15:41:46.739470 containerd[1510]: time="2025-02-13T15:41:46.739405750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
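The pod_startup_latency_tracker entry above can be checked by hand: no images were pulled (both pull timestamps are the zero time), so podStartSLOduration reduces to the gap between podCreationTimestamp (15:41:41) and watchObservedRunningTime (15:41:46.258187649). A short worked example:

```python
from datetime import datetime, timezone

# Timestamps taken from the log entry above; the running time is rounded
# to microseconds because datetime carries no nanoseconds.
created = datetime(2025, 2, 13, 15, 41, 41, tzinfo=timezone.utc)
running = datetime(2025, 2, 13, 15, 41, 46, 258188, tzinfo=timezone.utc)

print((running - created).total_seconds())  # 5.258188, matching 5.258187649
```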
Feb 13 15:41:46.739721 containerd[1510]: time="2025-02-13T15:41:46.739535604Z" level=info msg="RemovePodSandbox \"c76d379109a56392770b56ebc5d6df3980be9f108a7078a5e8ae47046cd68f4a\" returns successfully" Feb 13 15:41:46.740940 containerd[1510]: time="2025-02-13T15:41:46.740228821Z" level=info msg="StopPodSandbox for \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\"" Feb 13 15:41:46.740940 containerd[1510]: time="2025-02-13T15:41:46.740349153Z" level=info msg="TearDown network for sandbox \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\" successfully" Feb 13 15:41:46.740940 containerd[1510]: time="2025-02-13T15:41:46.740370547Z" level=info msg="StopPodSandbox for \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\" returns successfully" Feb 13 15:41:46.740940 containerd[1510]: time="2025-02-13T15:41:46.740904584Z" level=info msg="RemovePodSandbox for \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\"" Feb 13 15:41:46.741189 containerd[1510]: time="2025-02-13T15:41:46.740943721Z" level=info msg="Forcibly stopping sandbox \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\"" Feb 13 15:41:46.741189 containerd[1510]: time="2025-02-13T15:41:46.741024760Z" level=info msg="TearDown network for sandbox \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\" successfully" Feb 13 15:41:46.746002 containerd[1510]: time="2025-02-13T15:41:46.745935595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:41:46.746185 containerd[1510]: time="2025-02-13T15:41:46.746023473Z" level=info msg="RemovePodSandbox \"307b91ec49ce277be1f258571e500dfc8203425fc109e62c8af710f73938f9cd\" returns successfully" Feb 13 15:41:49.252303 systemd-networkd[1402]: lxc_health: Link UP Feb 13 15:41:49.257802 systemd-networkd[1402]: lxc_health: Gained carrier Feb 13 15:41:50.963963 systemd-networkd[1402]: lxc_health: Gained IPv6LL Feb 13 15:41:53.737590 ntpd[1480]: Listen normally on 15 lxc_health [fe80::3082:79ff:fe65:e71c%14]:123 Feb 13 15:41:53.997801 systemd[1]: run-containerd-runc-k8s.io-27f9d7dbf0a197d7ecc9c1291bef3298611eb8b0fbb9f12a78404988576840b8-runc.ABqSkF.mount: Deactivated successfully. Feb 13 15:41:56.366466 sshd[4713]: Connection closed by 139.178.68.195 port 53396 Feb 13 15:41:56.367805 sshd-session[4650]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:56.375288 systemd[1]: sshd@27-10.128.0.113:22-139.178.68.195:53396.service: Deactivated successfully. Feb 13 15:41:56.379019 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 15:41:56.380669 systemd-logind[1498]: Session 28 logged out. Waiting for processes to exit. Feb 13 15:41:56.382405 systemd-logind[1498]: Removed session 28.
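Finally, the sshd/systemd-logind churn through this excerpt (sessions 25 through 28) pairs cleanly into open and close events. A sketch that extracts session lifetimes from journald lines like the ones above, shown here on the real session 27 and 28 timestamps from this log:

```python
import re

log = """\
Feb 13 15:41:41.931864 systemd-logind[1498]: New session 27 of user core.
Feb 13 15:41:42.157850 systemd-logind[1498]: Session 27 logged out. Waiting for processes to exit.
Feb 13 15:41:42.491411 systemd-logind[1498]: New session 28 of user core.
Feb 13 15:41:56.380669 systemd-logind[1498]: Session 28 logged out. Waiting for processes to exit.
"""

EVENT = re.compile(r'^\w+ \d+ (?P<t>[\d:.]+) systemd-logind\[\d+\]: '
                   r'(?:New session (?P<new>\d+)|Session (?P<end>\d+) logged out)')

def seconds(hms: str) -> float:
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

opened = {}
for line in log.splitlines():
    m = EVENT.match(line)
    if not m:
        continue
    if m["new"]:
        opened[m["new"]] = seconds(m["t"])
    elif m["end"] in opened:
        print(f"session {m['end']}: {seconds(m['t']) - opened.pop(m['end']):.1f}s")
```

For this excerpt it reports roughly 0.2 s for session 27 and 13.9 s for session 28, consistent with the interactive work visible between their open and close entries.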