Jan 29 16:23:02.138594 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:23:02.138644 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:23:02.138663 kernel: BIOS-provided physical RAM map:
Jan 29 16:23:02.138678 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 29 16:23:02.138700 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 29 16:23:02.138714 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 29 16:23:02.138732 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 29 16:23:02.138748 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 29 16:23:02.138767 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd325fff] usable
Jan 29 16:23:02.138782 kernel: BIOS-e820: [mem 0x00000000bd326000-0x00000000bd32dfff] ACPI data
Jan 29 16:23:02.138797 kernel: BIOS-e820: [mem 0x00000000bd32e000-0x00000000bf8ecfff] usable
Jan 29 16:23:02.138812 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Jan 29 16:23:02.138827 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 29 16:23:02.138842 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 29 16:23:02.138864 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 29 16:23:02.138881 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 29 16:23:02.138898 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 29 16:23:02.138914 kernel: NX (Execute Disable) protection: active
Jan 29 16:23:02.138931 kernel: APIC: Static calls initialized
Jan 29 16:23:02.138946 kernel: efi: EFI v2.7 by EDK II
Jan 29 16:23:02.138964 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd326018
Jan 29 16:23:02.138980 kernel: random: crng init done
Jan 29 16:23:02.138997 kernel: secureboot: Secure boot disabled
Jan 29 16:23:02.139013 kernel: SMBIOS 2.4 present.
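A short aside on the e820 map above: the usable ranges can be totalled mechanically. A minimal Python sketch, assuming the journal dump has been saved as boot.log (a hypothetical file name, not something the log itself provides):

```python
import re

# Matches the kernel's "BIOS-e820: [mem 0x...-0x...] <type>" lines as printed above.
E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] ([\w ]+)")

usable = 0
with open("boot.log") as log:  # hypothetical: this journal dump saved locally
    for line in log:
        m = E820.search(line)
        if m and m.group(3).strip() == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            usable += end - start + 1  # e820 ranges are inclusive

print(f"usable RAM: {usable / 2**20:.1f} MiB")
# For the map above this comes to roughly 7676 MiB, consistent with the
# "Memory: 7511308K/7860552K available" line the kernel prints further down.
```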
Jan 29 16:23:02.139034 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Jan 29 16:23:02.139050 kernel: Hypervisor detected: KVM
Jan 29 16:23:02.139066 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:23:02.139083 kernel: kvm-clock: using sched offset of 13380424439 cycles
Jan 29 16:23:02.139101 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:23:02.139119 kernel: tsc: Detected 2299.998 MHz processor
Jan 29 16:23:02.139136 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:23:02.139153 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:23:02.139170 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 29 16:23:02.139188 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 29 16:23:02.139209 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:23:02.139226 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 29 16:23:02.139242 kernel: Using GB pages for direct mapping
Jan 29 16:23:02.139260 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:23:02.139276 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 29 16:23:02.139294 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 29 16:23:02.139318 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 29 16:23:02.139339 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 29 16:23:02.139357 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 29 16:23:02.139374 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Jan 29 16:23:02.139393 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 29 16:23:02.139412 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 29 16:23:02.139429 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 29 16:23:02.139447 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 29 16:23:02.139469 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 29 16:23:02.139487 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 29 16:23:02.139505 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 29 16:23:02.139523 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 29 16:23:02.139541 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 29 16:23:02.139559 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 29 16:23:02.139641 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 29 16:23:02.139659 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 29 16:23:02.139677 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 29 16:23:02.139716 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 29 16:23:02.139733 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 16:23:02.139751 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 16:23:02.139769 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 29 16:23:02.139787 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 29 16:23:02.139806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 29 16:23:02.139824 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 29 16:23:02.139842 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 29 16:23:02.139861 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Jan 29 16:23:02.139882 kernel: Zone ranges:
Jan 29 16:23:02.139900 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:23:02.139918 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 29 16:23:02.139935 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 29 16:23:02.139953 kernel: Movable zone start for each node
Jan 29 16:23:02.139971 kernel: Early memory node ranges
Jan 29 16:23:02.139989 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 29 16:23:02.140007 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 29 16:23:02.140025 kernel: node 0: [mem 0x0000000000100000-0x00000000bd325fff]
Jan 29 16:23:02.140046 kernel: node 0: [mem 0x00000000bd32e000-0x00000000bf8ecfff]
Jan 29 16:23:02.140063 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 29 16:23:02.140081 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 29 16:23:02.140099 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 29 16:23:02.140117 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:23:02.140135 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 29 16:23:02.140153 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 29 16:23:02.140170 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Jan 29 16:23:02.140189 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 29 16:23:02.140211 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 29 16:23:02.140228 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 29 16:23:02.140246 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:23:02.140263 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:23:02.140281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:23:02.140299 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:23:02.140317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:23:02.140335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:23:02.140353 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:23:02.140374 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 16:23:02.140392 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 29 16:23:02.140410 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:23:02.140428 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:23:02.140447 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 16:23:02.140465 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 16:23:02.140482 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 16:23:02.140500 kernel: pcpu-alloc: [0] 0 1
Jan 29 16:23:02.140518 kernel: kvm-guest: PV spinlocks enabled
Jan 29 16:23:02.140539 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 16:23:02.142212 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:23:02.142250 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:23:02.142269 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 29 16:23:02.142287 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:23:02.142304 kernel: Fallback order for Node 0: 0
Jan 29 16:23:02.142322 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272
Jan 29 16:23:02.142339 kernel: Policy zone: Normal
Jan 29 16:23:02.142364 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:23:02.142381 kernel: software IO TLB: area num 2.
Jan 29 16:23:02.142399 kernel: Memory: 7511308K/7860552K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 348988K reserved, 0K cma-reserved)
Jan 29 16:23:02.142417 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:23:02.142434 kernel: Kernel/User page tables isolation: enabled
Jan 29 16:23:02.142452 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:23:02.142469 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:23:02.142487 kernel: Dynamic Preempt: voluntary
Jan 29 16:23:02.142521 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:23:02.142540 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:23:02.142579 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:23:02.142599 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:23:02.142622 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:23:02.142640 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:23:02.142659 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:23:02.142677 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:23:02.142705 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 16:23:02.142727 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:23:02.142745 kernel: Console: colour dummy device 80x25
Jan 29 16:23:02.142763 kernel: printk: console [ttyS0] enabled
Jan 29 16:23:02.142782 kernel: ACPI: Core revision 20230628
Jan 29 16:23:02.142800 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:23:02.142818 kernel: x2apic enabled
Jan 29 16:23:02.142837 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:23:02.142855 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 29 16:23:02.142874 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 29 16:23:02.142896 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 29 16:23:02.142914 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 29 16:23:02.142933 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 29 16:23:02.142952 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:23:02.142971 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 29 16:23:02.142989 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 29 16:23:02.143007 kernel: Spectre V2 : Mitigation: IBRS
Jan 29 16:23:02.143026 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:23:02.143044 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:23:02.143066 kernel: RETBleed: Mitigation: IBRS
Jan 29 16:23:02.143084 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:23:02.143103 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 29 16:23:02.143121 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:23:02.143139 kernel: MDS: Mitigation: Clear CPU buffers
Jan 29 16:23:02.143158 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 16:23:02.143176 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:23:02.143194 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:23:02.143216 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:23:02.143234 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:23:02.143253 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 29 16:23:02.143271 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:23:02.143289 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:23:02.143307 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:23:02.143326 kernel: landlock: Up and running.
Jan 29 16:23:02.143344 kernel: SELinux: Initializing.
Jan 29 16:23:02.143362 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 16:23:02.143384 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 16:23:02.143403 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 29 16:23:02.143422 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:23:02.143440 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:23:02.143459 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:23:02.143477 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 29 16:23:02.143496 kernel: signal: max sigframe size: 1776
Jan 29 16:23:02.143514 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:23:02.143533 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:23:02.143555 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 16:23:02.143620 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:23:02.143639 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:23:02.143657 kernel: .... node #0, CPUs: #1
Jan 29 16:23:02.143677 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 29 16:23:02.143708 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 29 16:23:02.143727 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:23:02.143745 kernel: smpboot: Max logical packages: 1
Jan 29 16:23:02.143764 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 29 16:23:02.143786 kernel: devtmpfs: initialized
Jan 29 16:23:02.143805 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:23:02.143823 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 29 16:23:02.143842 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:23:02.143860 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:23:02.143879 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:23:02.143897 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:23:02.143916 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:23:02.143933 kernel: audit: type=2000 audit(1738167780.700:1): state=initialized audit_enabled=0 res=1
Jan 29 16:23:02.143956 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:23:02.143974 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:23:02.143992 kernel: cpuidle: using governor menu
Jan 29 16:23:02.144011 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:23:02.144029 kernel: dca service started, version 1.12.1
Jan 29 16:23:02.144047 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:23:02.144065 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
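An aside on the command line echoed above: the kernel prints it with a dracut-prepended copy of "rootflags=rw mount.usrflags=ro", and warns that BOOT_IMAGE=... is unknown to it and will be passed to user space (it reappears later in /init's environment). A minimal Python sketch of splitting such a line into key/value pairs; the string below is abbreviated from the log, and on a live system one would read /proc/cmdline instead:

```python
cmdline = (
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "mount.usr=/dev/mapper/usr root=LABEL=ROOT flatcar.oem.id=gce"
)  # abbreviated from the log above

params = {}
for token in cmdline.split():
    key, sep, value = token.partition("=")
    params[key] = value if sep else None  # a later duplicate overrides an earlier one

print(params["root"])            # -> LABEL=ROOT
print(params["flatcar.oem.id"])  # -> gce
```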
Jan 29 16:23:02.144084 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:23:02.144108 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:23:02.144126 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:23:02.144145 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:23:02.144163 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:23:02.144181 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:23:02.144199 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:23:02.144218 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:23:02.144236 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 29 16:23:02.144255 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:23:02.144272 kernel: ACPI: Interpreter enabled
Jan 29 16:23:02.144294 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 16:23:02.144312 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:23:02.144331 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:23:02.144349 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 29 16:23:02.144367 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 29 16:23:02.144385 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:23:02.144933 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:23:02.145596 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 29 16:23:02.145893 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 29 16:23:02.145917 kernel: PCI host bridge to bus 0000:00
Jan 29 16:23:02.146100 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 16:23:02.146268 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 16:23:02.146432 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:23:02.146612 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 29 16:23:02.146792 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:23:02.146993 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 29 16:23:02.147186 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 29 16:23:02.147378 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 29 16:23:02.147574 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 29 16:23:02.147791 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 29 16:23:02.147986 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 29 16:23:02.148170 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 29 16:23:02.148366 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 16:23:02.148549 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 29 16:23:02.149834 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 29 16:23:02.150039 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 16:23:02.150225 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 29 16:23:02.150426 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 29 16:23:02.150447 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:23:02.150465 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:23:02.150483 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:23:02.150500 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:23:02.150517 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 16:23:02.150534 kernel: iommu: Default domain type: Translated
Jan 29 16:23:02.150552 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:23:02.150603 kernel: efivars: Registered efivars operations
Jan 29 16:23:02.150626 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:23:02.150644 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:23:02.150661 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 29 16:23:02.150679 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 29 16:23:02.150705 kernel: e820: reserve RAM buffer [mem 0xbd326000-0xbfffffff]
Jan 29 16:23:02.150723 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 29 16:23:02.150740 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 29 16:23:02.150757 kernel: vgaarb: loaded
Jan 29 16:23:02.150774 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:23:02.150796 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:23:02.150814 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:23:02.150832 kernel: pnp: PnP ACPI init
Jan 29 16:23:02.150850 kernel: pnp: PnP ACPI: found 7 devices
Jan 29 16:23:02.150868 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:23:02.150886 kernel: NET: Registered PF_INET protocol family
Jan 29 16:23:02.150904 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 16:23:02.150922 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 29 16:23:02.150940 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:23:02.150962 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:23:02.150980 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 29 16:23:02.150998 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 29 16:23:02.151017 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 16:23:02.151036 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 16:23:02.151054 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:23:02.151072 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:23:02.151262 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 16:23:02.151437 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 16:23:02.154680 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:23:02.154895 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 29 16:23:02.155095 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 16:23:02.155122 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:23:02.155141 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 29 16:23:02.155161 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 29 16:23:02.155181 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 29 16:23:02.155209 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 29 16:23:02.155229 kernel: clocksource: Switched to clocksource tsc
Jan 29 16:23:02.155248 kernel: Initialise system trusted keyrings
Jan 29 16:23:02.155267 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 29 16:23:02.155287 kernel: Key type asymmetric registered
Jan 29 16:23:02.155306 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:23:02.155325 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:23:02.155345 kernel: io scheduler mq-deadline registered
Jan 29 16:23:02.155370 kernel: io scheduler kyber registered
Jan 29 16:23:02.155389 kernel: io scheduler bfq registered
Jan 29 16:23:02.155408 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:23:02.155429 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 29 16:23:02.155652 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 29 16:23:02.155679 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 29 16:23:02.155876 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 29 16:23:02.155900 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 29 16:23:02.156083 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 29 16:23:02.156113 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:23:02.156133 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:23:02.156152 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 29 16:23:02.156171 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 29 16:23:02.156190 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 29 16:23:02.156375 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 29 16:23:02.156400 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 16:23:02.156419 kernel: i8042: Warning: Keylock active
Jan 29 16:23:02.156442 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 16:23:02.156462 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 16:23:02.160983 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 29 16:23:02.161175 kernel: rtc_cmos 00:00: registered as rtc0
Jan 29 16:23:02.161352 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T16:23:01 UTC (1738167781)
Jan 29 16:23:02.161520 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 29 16:23:02.161544 kernel: intel_pstate: CPU model not supported
Jan 29 16:23:02.161578 kernel: pstore: Using crash dump compression: deflate
Jan 29 16:23:02.161604 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 29 16:23:02.161624 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:23:02.161643 kernel: Segment Routing with IPv6
Jan 29 16:23:02.161662 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:23:02.161687 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:23:02.161707 kernel: Key type dns_resolver registered
Jan 29 16:23:02.161726 kernel: IPI shorthand broadcast: enabled
Jan 29 16:23:02.161745 kernel: sched_clock: Marking stable (896004714, 185888963)->(1137638601, -55744924)
Jan 29 16:23:02.161765 kernel: registered taskstats version 1
Jan 29 16:23:02.161787 kernel: Loading compiled-in X.509 certificates
Jan 29 16:23:02.161806 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:23:02.161825 kernel: Key type .fscrypt registered
Jan 29 16:23:02.161844 kernel: Key type fscrypt-provisioning registered
Jan 29 16:23:02.161863 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:23:02.161882 kernel: ima: No architecture policies found
Jan 29 16:23:02.161901 kernel: clk: Disabling unused clocks
Jan 29 16:23:02.161920 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:23:02.161939 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:23:02.161962 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:23:02.161981 kernel: Run /init as init process
Jan 29 16:23:02.162000 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 16:23:02.162019 kernel: with arguments:
Jan 29 16:23:02.162038 kernel: /init
Jan 29 16:23:02.162057 kernel: with environment:
Jan 29 16:23:02.162076 kernel: HOME=/
Jan 29 16:23:02.162096 kernel: TERM=linux
Jan 29 16:23:02.162115 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:23:02.162140 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:23:02.162165 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:23:02.162186 systemd[1]: Detected virtualization google.
Jan 29 16:23:02.162206 systemd[1]: Detected architecture x86-64.
Jan 29 16:23:02.162225 systemd[1]: Running in initrd.
Jan 29 16:23:02.162244 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:23:02.162266 systemd[1]: Hostname set to .
Jan 29 16:23:02.162289 systemd[1]: Initializing machine ID from random generator.
Jan 29 16:23:02.162309 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:23:02.162329 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:23:02.162350 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:23:02.162371 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:23:02.162391 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:23:02.162412 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:23:02.162439 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:23:02.162476 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:23:02.162501 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:23:02.162522 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:23:02.162543 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:23:02.164404 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:23:02.164431 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:23:02.164452 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:23:02.164473 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:23:02.164495 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:23:02.164516 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:23:02.164537 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:23:02.164558 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:23:02.164593 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:23:02.164621 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:23:02.164642 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:23:02.164663 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:23:02.164692 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:23:02.164714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:23:02.164739 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:23:02.164760 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:23:02.164781 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:23:02.164802 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:23:02.164827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:23:02.164848 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:23:02.164917 systemd-journald[184]: Collecting audit messages is disabled.
Jan 29 16:23:02.165044 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:23:02.165086 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:23:02.165109 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:23:02.165134 systemd-journald[184]: Journal started
Jan 29 16:23:02.165182 systemd-journald[184]: Runtime Journal (/run/log/journal/f538695aea8743f58facbc6700fcd3a1) is 8M, max 148.6M, 140.6M free.
Jan 29 16:23:02.119654 systemd-modules-load[185]: Inserted module 'overlay'
Jan 29 16:23:02.171849 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:23:02.172024 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:23:02.174754 kernel: Bridge firewalling registered
Jan 29 16:23:02.174036 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 29 16:23:02.175738 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:23:02.180899 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:23:02.192016 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:23:02.197896 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:23:02.206002 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:23:02.209045 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:23:02.217138 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:23:02.234942 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:23:02.237752 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:23:02.247873 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:23:02.254232 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:23:02.264804 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:23:02.275778 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:23:02.296296 dracut-cmdline[218]: dracut-dracut-053
Jan 29 16:23:02.301558 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:23:02.332613 systemd-resolved[219]: Positive Trust Anchors:
Jan 29 16:23:02.333167 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:23:02.333241 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:23:02.340018 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jan 29 16:23:02.342175 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:23:02.367828 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:23:02.410614 kernel: SCSI subsystem initialized
Jan 29 16:23:02.421617 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:23:02.432619 kernel: iscsi: registered transport (tcp)
Jan 29 16:23:02.463947 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:23:02.464052 kernel: QLogic iSCSI HBA Driver
Jan 29 16:23:02.516884 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:23:02.522914 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:23:02.602604 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:23:02.602698 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:23:02.602727 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:23:02.660640 kernel: raid6: avx2x4 gen() 17819 MB/s
Jan 29 16:23:02.681666 kernel: raid6: avx2x2 gen() 17906 MB/s
Jan 29 16:23:02.707611 kernel: raid6: avx2x1 gen() 13800 MB/s
Jan 29 16:23:02.707671 kernel: raid6: using algorithm avx2x2 gen() 17906 MB/s
Jan 29 16:23:02.734721 kernel: raid6: .... xor() 18859 MB/s, rmw enabled
Jan 29 16:23:02.734786 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 16:23:02.764622 kernel: xor: automatically using best checksumming function avx
Jan 29 16:23:02.934616 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:23:02.947696 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:23:02.952798 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:23:03.006816 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jan 29 16:23:03.014968 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:23:03.044796 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:23:03.085873 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Jan 29 16:23:03.123253 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:23:03.139772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:23:03.254886 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:23:03.273880 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:23:03.326952 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:23:03.352860 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:23:03.418745 kernel: scsi host0: Virtio SCSI HBA
Jan 29 16:23:03.419611 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 29 16:23:03.419855 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:23:03.378996 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:23:03.394723 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:23:03.414896 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:23:03.490894 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:23:03.510770 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 16:23:03.510813 kernel: AES CTR mode by8 optimization enabled
Jan 29 16:23:03.491345 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:23:03.537133 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:23:03.552840 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jan 29 16:23:03.614219 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 29 16:23:03.614492 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 29 16:23:03.614760 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 29 16:23:03.615010 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 29 16:23:03.615232 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:23:03.615259 kernel: GPT:17805311 != 25165823
Jan 29 16:23:03.615293 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:23:03.615316 kernel: GPT:17805311 != 25165823
Jan 29 16:23:03.615336 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:23:03.615358 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:23:03.615384 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 29 16:23:03.617269 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:23:03.620420 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:23:03.639006 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:23:03.667330 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
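An aside on the GPT warnings above: they are the expected first-boot state for a cloud image written for a smaller disk and provisioned onto a larger one; the disk-uuid step that follows rewrites the GPT at the real end of the disk. A small Python sketch of the arithmetic, using only numbers from the log:

```python
LOGICAL_BLOCK = 512
total_blocks = 25165824    # "sd 0:0:1:0: [sda] 25165824 512-byte logical blocks"
alt_header_lba = 17805311  # "GPT:17805311 != 25165823"

last_lba = total_blocks - 1  # 25165823, where the backup GPT header should sit
grown = (last_lba - alt_header_lba) * LOGICAL_BLOCK
print(f"image GPT ends {grown / 2**30:.2f} GiB before the end of the disk")
# -> about 3.51 GiB: an ~8.5 GiB image on the 12 GiB PersistentDisk
```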
Jan 29 16:23:03.687852 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (447)
Jan 29 16:23:03.701803 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (448)
Jan 29 16:23:03.710384 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:23:03.711786 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:23:03.755368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:23:03.798962 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 29 16:23:03.831625 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 29 16:23:03.853487 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 29 16:23:03.863893 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 29 16:23:03.872952 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 29 16:23:03.923796 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:23:03.939870 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:23:03.971788 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:23:03.972014 disk-uuid[543]: Primary Header is updated.
Jan 29 16:23:03.972014 disk-uuid[543]: Secondary Entries is updated.
Jan 29 16:23:03.972014 disk-uuid[543]: Secondary Header is updated.
Jan 29 16:23:04.020403 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:23:05.001112 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:23:05.001191 disk-uuid[545]: The operation has completed successfully.
Jan 29 16:23:05.084445 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:23:05.084608 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:23:05.136779 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:23:05.166966 sh[567]: Success
Jan 29 16:23:05.191710 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 29 16:23:05.285484 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:23:05.303871 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:23:05.336621 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:23:05.372605 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3
Jan 29 16:23:05.372689 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:23:05.389283 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:23:05.389357 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:23:05.396104 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:23:05.426618 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 16:23:05.433209 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:23:05.434188 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:23:05.439813 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:23:05.454791 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:23:05.543710 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:23:05.543891 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:23:05.543931 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:23:05.562302 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:23:05.562401 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:23:05.588341 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:23:05.587800 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:23:05.605129 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:23:05.611774 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:23:05.632236 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:23:05.674637 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:23:05.759124 systemd-networkd[750]: lo: Link UP
Jan 29 16:23:05.759138 systemd-networkd[750]: lo: Gained carrier
Jan 29 16:23:05.761740 systemd-networkd[750]: Enumeration completed
Jan 29 16:23:05.764748 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:23:05.773345 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:23:05.773364 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:23:05.822308 ignition[741]: Ignition 2.20.0
Jan 29 16:23:05.775084 systemd[1]: Reached target network.target - Network.
Jan 29 16:23:05.822321 ignition[741]: Stage: fetch-offline
Jan 29 16:23:05.777591 systemd-networkd[750]: eth0: Link UP
Jan 29 16:23:05.822394 ignition[741]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:23:05.777612 systemd-networkd[750]: eth0: Gained carrier
Jan 29 16:23:05.822407 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 29 16:23:05.777634 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:23:05.822631 ignition[741]: parsed url from cmdline: ""
Jan 29 16:23:05.794991 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.57/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 29 16:23:05.822637 ignition[741]: no config URL provided
Jan 29 16:23:05.825155 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:23:05.822644 ignition[741]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:23:05.854792 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
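An aside on the DHCPv4 lease a few lines above: GCE hands out a /32, so the gateway 10.128.0.1 is not inside the interface's prefix and is reachable only through an on-link route, which systemd-networkd installs for a DHCP-supplied router outside the assigned prefix. A tiny Python sketch of the check:

```python
import ipaddress

iface = ipaddress.ip_interface("10.128.0.57/32")  # from the DHCPv4 line above
gateway = ipaddress.ip_address("10.128.0.1")

# The /32 network contains only the address itself, so this prints False;
# hence the gateway needs a scope-link route rather than an on-subnet one.
print(gateway in iface.network)
```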
Jan 29 16:23:05.822656 ignition[741]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:23:05.888897 unknown[762]: fetched base config from "system"
Jan 29 16:23:05.822669 ignition[741]: failed to fetch config: resource requires networking
Jan 29 16:23:05.888909 unknown[762]: fetched base config from "system"
Jan 29 16:23:05.823037 ignition[741]: Ignition finished successfully
Jan 29 16:23:05.888918 unknown[762]: fetched user config from "gcp"
Jan 29 16:23:05.876640 ignition[762]: Ignition 2.20.0
Jan 29 16:23:05.898165 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 16:23:05.876652 ignition[762]: Stage: fetch
Jan 29 16:23:05.912589 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:23:05.876942 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:23:05.956239 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:23:05.876959 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 29 16:23:05.985750 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:23:05.877101 ignition[762]: parsed url from cmdline: ""
Jan 29 16:23:06.038835 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:23:05.877108 ignition[762]: no config URL provided
Jan 29 16:23:06.058695 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:23:05.877118 ignition[762]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:23:06.064952 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:23:05.877133 ignition[762]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:23:06.092892 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:23:05.877164 ignition[762]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 29 16:23:06.099959 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:23:05.882080 ignition[762]: GET result: OK
Jan 29 16:23:06.124883 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:23:05.882161 ignition[762]: parsing config with SHA512: b43ba9cbddbde49f4bf85a29ae6336ac7014662e1db20389d819dcc2fff88703dc6bfb4e9aee4b7562dcf5fbc519d602af534d6d508f9108b3d07b65fa1eeb2b
Jan 29 16:23:06.140795 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
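An aside on the fetch stage above: the GET against the GCE metadata server is a plain HTTP request that requires the "Metadata-Flavor: Google" header, and the log then records the SHA512 of the config body before parsing. A minimal Python sketch of the same fetch (not Ignition's actual code; it only works from inside a GCE VM):

```python
import hashlib
import urllib.request

URL = "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"

# GCE's metadata server rejects requests without this header.
req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req, timeout=10) as resp:
    body = resp.read()

print("GET result: OK")
print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())
```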
Jan 29 16:23:05.889513 ignition[762]: fetch: fetch complete
Jan 29 16:23:05.889520 ignition[762]: fetch: fetch passed
Jan 29 16:23:05.889595 ignition[762]: Ignition finished successfully
Jan 29 16:23:05.943746 ignition[768]: Ignition 2.20.0
Jan 29 16:23:05.943756 ignition[768]: Stage: kargs
Jan 29 16:23:05.943973 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:23:05.943985 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 29 16:23:05.945028 ignition[768]: kargs: kargs passed
Jan 29 16:23:05.945091 ignition[768]: Ignition finished successfully
Jan 29 16:23:06.036253 ignition[774]: Ignition 2.20.0
Jan 29 16:23:06.036264 ignition[774]: Stage: disks
Jan 29 16:23:06.036489 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:23:06.036502 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 29 16:23:06.037612 ignition[774]: disks: disks passed
Jan 29 16:23:06.037673 ignition[774]: Ignition finished successfully
Jan 29 16:23:06.180703 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 16:23:06.369196 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:23:06.373744 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:23:06.516624 kernel: EXT4-fs (sda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:23:06.517687 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:23:06.518536 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:23:06.551711 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:23:06.554399 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:23:06.584302 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 16:23:06.654934 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (790)
Jan 29 16:23:06.654979 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:23:06.654996 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:23:06.655011 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:23:06.655027 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:23:06.655047 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:23:06.584434 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:23:06.584481 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:23:06.628368 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:23:06.664372 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:23:06.694822 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:23:06.829178 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:23:06.839728 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:23:06.849706 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:23:06.859694 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:23:06.991789 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:23:07.021772 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:23:07.051780 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:23:07.046877 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:23:07.094500 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:23:07.103866 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:23:07.126886 ignition[903]: INFO : Ignition 2.20.0
Jan 29 16:23:07.126886 ignition[903]: INFO : Stage: mount
Jan 29 16:23:07.126886 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:23:07.126886 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 29 16:23:07.126886 ignition[903]: INFO : mount: mount passed
Jan 29 16:23:07.126886 ignition[903]: INFO : Ignition finished successfully
Jan 29 16:23:07.115717 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:23:07.363015 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:23:07.367866 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:23:07.418659 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (914)
Jan 29 16:23:07.419696 systemd-networkd[750]: eth0: Gained IPv6LL
Jan 29 16:23:07.461750 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:23:07.461792 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:23:07.461816 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:23:07.461841 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:23:07.461872 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:23:07.462610 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:23:07.496043 ignition[930]: INFO : Ignition 2.20.0 Jan 29 16:23:07.496043 ignition[930]: INFO : Stage: files Jan 29 16:23:07.511772 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:23:07.511772 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 16:23:07.511772 ignition[930]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:23:07.511772 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:23:07.511772 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:23:07.511772 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:23:07.511772 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:23:07.511772 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:23:07.508756 unknown[930]: wrote ssh authorized keys file for user: core Jan 29 16:23:07.615842 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:23:07.615842 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 16:23:07.649844 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 16:23:07.793502 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:23:07.810714 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:23:07.810714 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 16:23:08.143032 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 16:23:08.315704 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:23:08.330722 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:23:08.330722 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:23:08.330722 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:23:08.330722 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:23:08.330722 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:23:08.330722 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:23:08.330722 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:23:08.330722 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:23:08.330722 ignition[930]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:23:08.330722 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:23:08.330722 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:23:08.330722 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:23:08.330722 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:23:08.330722 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 16:23:08.561821 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 16:23:08.966612 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:23:08.966612 ignition[930]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 16:23:09.006873 ignition[930]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:23:09.006873 ignition[930]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:23:09.006873 ignition[930]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 16:23:09.006873 ignition[930]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 29 16:23:09.006873 ignition[930]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 16:23:09.006873 ignition[930]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:23:09.006873 ignition[930]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:23:09.006873 ignition[930]: INFO : files: files passed Jan 29 16:23:09.006873 ignition[930]: INFO : Ignition finished successfully Jan 29 16:23:08.970595 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:23:09.001877 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 16:23:09.024818 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:23:09.072303 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:23:09.232873 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:23:09.232873 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:23:09.072473 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
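The files-stage operations above (the helm and cilium downloads, the core user's SSH key, the kubernetes sysext link, the prepare-helm.service preset) are what a Butane config transpiled to Ignition would produce. A trimmed sketch, assuming the Butane flatcar variant; the SSH key is a placeholder and the unit body is invented for illustration, only the paths and URLs appear in the log:

    cat <<'EOF' > config.bu
    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... placeholder
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz --strip-components=1 linux-amd64/helm
            [Install]
            WantedBy=multi-user.target
    EOF
    butane --strict config.bu > config.ign   # transpile to the Ignition JSON executed above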
Jan 29 16:23:09.287755 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:23:09.085093 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:23:09.097062 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:23:09.124834 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:23:09.209201 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:23:09.209329 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:23:09.225519 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:23:09.242791 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:23:09.263903 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:23:09.271796 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:23:09.318821 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:23:09.342793 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:23:09.378513 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:23:09.390936 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:23:09.412945 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:23:09.431043 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:23:09.431264 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:23:09.465021 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:23:09.485916 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:23:09.505017 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:23:09.517170 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:23:09.556061 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:23:09.584048 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:23:09.595154 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:23:09.630069 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:23:09.640137 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:23:09.658124 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:23:09.676058 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:23:09.676274 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:23:09.721100 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:23:09.746041 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:23:09.754027 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:23:09.754193 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:23:09.772069 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:23:09.772274 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
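From here the initrd tears itself down. The long run of "Stopped ..." lines that follows is not ad hoc: initrd-cleanup.service effectively isolates one target, and ordinary dependency logic winds down everything not needed by it. A one-line sketch of that hand-off (upstream systemd behavior, paraphrased):

    # What initrd-cleanup.service amounts to; the subsequent "Stopped ..." lines
    # are systemd winding units down toward this target:
    systemctl --no-block isolate initrd-switch-root.target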
Jan 29 16:23:09.818036 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:23:09.818281 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:23:09.831188 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:23:09.896864 ignition[983]: INFO : Ignition 2.20.0 Jan 29 16:23:09.896864 ignition[983]: INFO : Stage: umount Jan 29 16:23:09.896864 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:23:09.896864 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 16:23:09.896864 ignition[983]: INFO : umount: umount passed Jan 29 16:23:09.896864 ignition[983]: INFO : Ignition finished successfully Jan 29 16:23:09.831378 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:23:09.855966 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:23:09.904708 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:23:09.904986 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:23:09.930032 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:23:09.982757 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:23:09.983083 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:23:09.995023 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:23:09.995224 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:23:10.030408 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:23:10.031796 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:23:10.031915 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:23:10.048377 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:23:10.048510 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:23:10.069180 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:23:10.069307 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:23:10.089048 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:23:10.089124 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:23:10.106943 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:23:10.107026 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:23:10.117006 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 16:23:10.117079 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 16:23:10.133987 systemd[1]: Stopped target network.target - Network. Jan 29 16:23:10.148947 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:23:10.149039 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:23:10.164003 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:23:10.196758 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:23:10.200684 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:23:10.216841 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:23:10.224951 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:23:10.242983 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 29 16:23:10.243075 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:23:10.258002 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:23:10.258075 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:23:10.291924 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:23:10.292016 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:23:10.317943 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:23:10.318034 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:23:10.325994 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:23:10.326074 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:23:10.361128 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:23:10.380891 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:23:10.399308 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:23:10.399454 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:23:10.412406 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:23:10.412710 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:23:10.412852 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:23:10.442436 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:23:10.443904 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:23:10.443975 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:23:10.465715 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:23:10.467903 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:23:10.467987 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:23:10.502064 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:23:10.502143 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:23:10.530044 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:23:10.530121 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:23:10.547929 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:23:10.548022 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:23:10.556076 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:23:10.575364 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:23:10.575455 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:23:10.575972 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:23:10.970728 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 29 16:23:10.576140 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:23:10.611004 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:23:10.611090 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:23:10.623989 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 29 16:23:10.624067 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:23:10.651906 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:23:10.652000 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:23:10.681065 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:23:10.681156 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:23:10.710013 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:23:10.710116 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:23:10.755750 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:23:10.767856 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:23:10.767934 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:23:10.796010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:23:10.796085 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:23:10.818286 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 29 16:23:10.818388 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:23:10.818948 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:23:10.819065 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:23:10.828267 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:23:10.828379 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:23:10.857147 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:23:10.871798 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:23:10.915975 systemd[1]: Switching root. Jan 29 16:23:11.219747 systemd-journald[184]: Journal stopped Jan 29 16:23:13.890434 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:23:13.890476 kernel: SELinux: policy capability open_perms=1 Jan 29 16:23:13.890491 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:23:13.890504 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:23:13.890515 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:23:13.890526 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:23:13.890538 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:23:13.890550 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:23:13.890588 kernel: audit: type=1403 audit(1738167791.629:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:23:13.890610 systemd[1]: Successfully loaded SELinux policy in 93.844ms. Jan 29 16:23:13.890624 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.694ms. Jan 29 16:23:13.890638 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:23:13.890651 systemd[1]: Detected virtualization google. Jan 29 16:23:13.890663 systemd[1]: Detected architecture x86-64. 
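The "Switching root" line is the actual pivot out of the initrd: journald stops, PID 1 re-executes into the real root filesystem, and the kernel then loads the SELinux policy. The equivalent command, as a sketch of what initrd-switch-root.service performs:

    systemctl --no-block switch-root /sysroot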
Jan 29 16:23:13.890681 systemd[1]: Detected first boot. Jan 29 16:23:13.890695 systemd[1]: Initializing machine ID from random generator. Jan 29 16:23:13.890708 zram_generator::config[1026]: No configuration found. Jan 29 16:23:13.890721 kernel: Guest personality initialized and is inactive Jan 29 16:23:13.890734 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:23:13.890749 kernel: Initialized host personality Jan 29 16:23:13.890761 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:23:13.890772 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:23:13.890793 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:23:13.890805 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:23:13.890820 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:23:13.890832 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:23:13.890846 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:23:13.890858 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:23:13.890875 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:23:13.890888 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:23:13.890901 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:23:13.890914 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:23:13.890927 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:23:13.890940 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:23:13.890953 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:23:13.890970 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:23:13.890983 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:23:13.890996 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:23:13.891009 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:23:13.891023 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:23:13.891043 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:23:13.891056 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:23:13.891070 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:23:13.891086 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 16:23:13.891100 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:23:13.891114 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:23:13.891127 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:23:13.891141 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:23:13.891154 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:23:13.891167 systemd[1]: Reached target swap.target - Swaps. 
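"Detected first boot" and "Initializing machine ID from random generator" are systemd noticing an empty /etc/machine-id; "Populated /etc with preset unit settings" is the preset pass that follows. Illustrative equivalents (both commands exist, though systemd runs this logic internally rather than via a shell):

    systemd-machine-id-setup   # generate /etc/machine-id on first boot
    systemctl preset-all       # apply vendor presets to /etc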
Jan 29 16:23:13.891180 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:23:13.891197 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:23:13.891210 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:23:13.891224 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:23:13.891238 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:23:13.891254 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:23:13.891269 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:23:13.891282 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:23:13.891296 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:23:13.891309 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:23:13.891323 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:23:13.891336 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:23:13.891350 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:23:13.891367 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:23:13.891381 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:23:13.891394 systemd[1]: Reached target machines.target - Containers. Jan 29 16:23:13.891409 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:23:13.891423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:23:13.891437 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:23:13.891450 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:23:13.891464 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:23:13.891477 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:23:13.891494 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:23:13.891507 kernel: ACPI: bus type drm_connector registered Jan 29 16:23:13.891520 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:23:13.891533 kernel: fuse: init (API version 7.39) Jan 29 16:23:13.891546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:23:13.891591 kernel: loop: module loaded Jan 29 16:23:13.891615 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:23:13.891633 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 16:23:13.891647 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:23:13.891660 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:23:13.891674 systemd[1]: Stopped systemd-fsck-usr.service. 
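The modprobe@*.service instances being started here come from a single template unit; each instance loads the kernel module named by its instance suffix, which is why the drm, fuse, and loop "init" kernel lines appear interleaved with the service messages. A sketch of what one instance does (template internals paraphrased, flags assumed):

    systemctl cat modprobe@.service   # inspect the real template
    modprobe -abq fuse                # roughly what modprobe@fuse.service runs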
Jan 29 16:23:13.891688 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:23:13.891702 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:23:13.891716 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:23:13.891758 systemd-journald[1114]: Collecting audit messages is disabled. Jan 29 16:23:13.891796 systemd-journald[1114]: Journal started Jan 29 16:23:13.891824 systemd-journald[1114]: Runtime Journal (/run/log/journal/d01aed5063e24277b59ed751469858cc) is 8M, max 148.6M, 140.6M free. Jan 29 16:23:12.625996 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:23:12.638481 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 16:23:12.639092 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:23:13.920609 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:23:13.945593 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:23:13.963607 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:23:13.978621 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:23:14.011128 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:23:14.011235 systemd[1]: Stopped verity-setup.service. Jan 29 16:23:14.036605 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:23:14.051597 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:23:14.062254 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:23:14.073026 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:23:14.084006 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:23:14.094009 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:23:14.104020 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:23:14.115009 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:23:14.125275 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:23:14.137224 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:23:14.149141 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:23:14.149443 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:23:14.161151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:23:14.161446 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:23:14.173123 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:23:14.173411 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:23:14.184127 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:23:14.184415 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:23:14.196233 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 29 16:23:14.196537 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:23:14.207167 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:23:14.207476 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:23:14.218240 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:23:14.228217 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:23:14.240214 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:23:14.252217 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:23:14.264254 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:23:14.289701 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:23:14.305738 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:23:14.326965 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:23:14.336787 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:23:14.337059 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:23:14.349933 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:23:14.365872 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:23:14.387938 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:23:14.397947 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:23:14.405032 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:23:14.423751 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:23:14.435985 systemd-journald[1114]: Time spent on flushing to /var/log/journal/d01aed5063e24277b59ed751469858cc is 102.750ms for 945 entries. Jan 29 16:23:14.435985 systemd-journald[1114]: System Journal (/var/log/journal/d01aed5063e24277b59ed751469858cc) is 8M, max 584.8M, 576.8M free. Jan 29 16:23:14.560184 systemd-journald[1114]: Received client request to flush runtime journal. Jan 29 16:23:14.444403 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:23:14.450852 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:23:14.463729 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:23:14.473642 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:23:14.493048 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:23:14.523067 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:23:14.534836 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:23:14.553524 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:23:14.565173 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
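The journal-flush request visible here moves the volatile runtime journal into persistent storage, which is why journald reports both a Runtime Journal and a System Journal with separate size caps. The same flush can be requested manually:

    journalctl --flush        # migrate /run/log/journal/... to /var/log/journal/...
    journalctl --disk-usage   # check usage against the caps logged above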
Jan 29 16:23:14.590448 kernel: loop0: detected capacity change from 0 to 52152 Jan 29 16:23:14.591878 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:23:14.604469 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:23:14.618412 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:23:14.628176 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:23:14.649995 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:23:14.662597 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:23:14.679893 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:23:14.693538 udevadm[1152]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 16:23:14.709456 kernel: loop1: detected capacity change from 0 to 147912 Jan 29 16:23:14.710520 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:23:14.729954 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:23:14.748699 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:23:14.754011 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:23:14.804259 kernel: loop2: detected capacity change from 0 to 138176 Jan 29 16:23:14.822420 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Jan 29 16:23:14.822456 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Jan 29 16:23:14.835285 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:23:14.922615 kernel: loop3: detected capacity change from 0 to 210664 Jan 29 16:23:15.066647 kernel: loop4: detected capacity change from 0 to 52152 Jan 29 16:23:15.108639 kernel: loop5: detected capacity change from 0 to 147912 Jan 29 16:23:15.167632 kernel: loop6: detected capacity change from 0 to 138176 Jan 29 16:23:15.232699 kernel: loop7: detected capacity change from 0 to 210664 Jan 29 16:23:15.277987 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 29 16:23:15.279047 (sd-merge)[1174]: Merged extensions into '/usr'. Jan 29 16:23:15.292656 systemd[1]: Reload requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:23:15.293085 systemd[1]: Reloading... Jan 29 16:23:15.391594 zram_generator::config[1198]: No configuration found. Jan 29 16:23:15.681304 ldconfig[1145]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:23:15.734653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:23:15.869479 systemd[1]: Reloading finished in 575 ms. Jan 29 16:23:15.886844 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:23:15.897216 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:23:15.909168 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:23:15.931203 systemd[1]: Starting ensure-sysext.service... 
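(sd-merge) is systemd-sysext assembling an overlay over /usr from the extension images staged earlier (including the kubernetes sysext that Ignition wrote above); the loop0..loop7 capacity-change lines are those images being attached. After boot the merge can be inspected or redone:

    systemd-sysext status     # shows containerd-flatcar, docker-flatcar, kubernetes, oem-gce
    systemd-sysext refresh    # re-merge after adding or replacing a *.raw extension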
Jan 29 16:23:15.946813 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:23:15.966921 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:23:15.992962 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:23:15.993407 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:23:15.994538 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:23:15.995117 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 29 16:23:15.995242 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 29 16:23:16.001425 systemd[1]: Reload requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:23:16.001453 systemd[1]: Reloading... Jan 29 16:23:16.004253 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:23:16.006616 systemd-tmpfiles[1244]: Skipping /boot Jan 29 16:23:16.042418 systemd-udevd[1245]: Using default interface naming scheme 'v255'. Jan 29 16:23:16.053899 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:23:16.053926 systemd-tmpfiles[1244]: Skipping /boot Jan 29 16:23:16.154594 zram_generator::config[1274]: No configuration found. Jan 29 16:23:16.435483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:23:16.464943 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 29 16:23:16.542103 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 16:23:16.542148 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 16:23:16.565953 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:23:16.617715 kernel: EDAC MC: Ver: 3.0.0 Jan 29 16:23:16.633184 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:23:16.636828 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1289) Jan 29 16:23:16.636881 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jan 29 16:23:16.633535 systemd[1]: Reloading finished in 631 ms. Jan 29 16:23:16.649956 kernel: ACPI: button: Sleep Button [SLPF] Jan 29 16:23:16.659722 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:23:16.685466 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:23:16.713665 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Jan 29 16:23:16.733892 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:23:16.748594 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:23:16.749663 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:23:16.763831 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 16:23:16.783257 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
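The "Duplicate line for path" warnings mean two tmpfiles.d fragments declared the same path (for example /root or /var/log/journal); systemd keeps the first and ignores the rest, so they are cosmetic. The merged configuration can be audited directly:

    # Show the effective tmpfiles.d config and locate competing entries for a path:
    systemd-tmpfiles --cat-config | grep -n '/root'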
Jan 29 16:23:16.802251 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:23:16.821836 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:23:16.889686 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:23:16.902860 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:23:16.913672 augenrules[1372]: No rules Jan 29 16:23:16.915084 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:23:16.915427 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:23:16.927493 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:23:16.951933 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:23:16.985032 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 29 16:23:17.000683 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:23:17.007841 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:23:17.008272 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:23:17.009996 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:23:17.032928 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:23:17.052809 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:23:17.055917 lvm[1383]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:23:17.061888 augenrules[1382]: /sbin/augenrules: No change Jan 29 16:23:17.067812 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:23:17.083556 augenrules[1403]: No rules Jan 29 16:23:17.086369 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:23:17.109858 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 16:23:17.118991 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:23:17.125987 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:23:17.137748 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:23:17.137892 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:23:17.153790 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:23:17.157744 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:23:17.184891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:23:17.194705 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:23:17.194778 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
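augenrules reporting "No rules" simply reflects an empty /etc/audit/rules.d/; its job is to compile any *.rules fragments there into a single rule set and load it. For reference:

    augenrules --check   # report whether the compiled rules are current
    augenrules --load    # compile rules.d fragments and load them via auditctl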
Jan 29 16:23:17.204389 systemd[1]: Finished ensure-sysext.service. Jan 29 16:23:17.216196 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:23:17.216701 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:23:17.228164 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:23:17.238317 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:23:17.261151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:23:17.261476 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:23:17.262112 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:23:17.262779 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:23:17.263315 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:23:17.263577 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:23:17.264072 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:23:17.266114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:23:17.272199 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:23:17.272859 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:23:17.286906 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 16:23:17.296495 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:23:17.303371 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:23:17.313876 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 29 16:23:17.313988 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:23:17.314100 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:23:17.345161 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:23:17.422335 systemd-networkd[1350]: lo: Link UP Jan 29 16:23:17.422350 systemd-networkd[1350]: lo: Gained carrier Jan 29 16:23:17.426136 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:23:17.431214 systemd-networkd[1350]: Enumeration completed Jan 29 16:23:17.431965 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:23:17.435172 systemd-networkd[1350]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:23:17.435182 systemd-networkd[1350]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:23:17.435880 systemd-networkd[1350]: eth0: Link UP Jan 29 16:23:17.435887 systemd-networkd[1350]: eth0: Gained carrier Jan 29 16:23:17.435915 systemd-networkd[1350]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:23:17.441857 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:23:17.447977 systemd-resolved[1354]: Positive Trust Anchors: Jan 29 16:23:17.447997 systemd-resolved[1354]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:23:17.448063 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:23:17.448708 systemd-networkd[1350]: eth0: DHCPv4 address 10.128.0.57/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 29 16:23:17.454320 systemd-resolved[1354]: Defaulting to hostname 'linux'. Jan 29 16:23:17.464814 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:23:17.475982 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:23:17.486246 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:23:17.498213 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 29 16:23:17.512127 systemd[1]: Reached target network.target - Network. Jan 29 16:23:17.520777 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:23:17.531770 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:23:17.541985 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:23:17.553829 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:23:17.565956 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:23:17.575945 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:23:17.587764 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:23:17.598734 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:23:17.598796 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:23:17.607764 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:23:17.618959 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:23:17.630630 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:23:17.641442 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:23:17.653034 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:23:17.664755 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:23:17.685597 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:23:17.696419 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:23:17.709010 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:23:17.721013 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:23:17.732777 systemd[1]: Reached target sockets.target - Socket Units. 
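In this stretch networkd matched eth0 against the distribution's catch-all unit and ran DHCPv4, while resolved seeded its DNSSEC trust anchor (the root-zone DS record spelled out above). A sketch of what such a catch-all .network file typically contains; the contents are assumed, only the file name zz-default.network appears in the log:

    cat <<'EOF' > /etc/systemd/network/zz-default.network
    [Match]
    Name=*
    [Network]
    DHCP=yes
    EOF
    networkctl status eth0   # should show 10.128.0.57/32 via DHCPv4, gateway 10.128.0.1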
Jan 29 16:23:17.742721 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:23:17.750862 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:23:17.750926 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:23:17.756765 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:23:17.780374 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:23:17.802791 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:23:17.822628 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:23:17.845188 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:23:17.854715 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:23:17.855400 jq[1462]: false Jan 29 16:23:17.864826 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:23:17.885790 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 16:23:17.899753 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:23:17.916182 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:23:17.925406 extend-filesystems[1463]: Found loop4 Jan 29 16:23:17.934804 extend-filesystems[1463]: Found loop5 Jan 29 16:23:17.934804 extend-filesystems[1463]: Found loop6 Jan 29 16:23:17.934804 extend-filesystems[1463]: Found loop7 Jan 29 16:23:17.934804 extend-filesystems[1463]: Found sda Jan 29 16:23:17.934804 extend-filesystems[1463]: Found sda1 Jan 29 16:23:17.934804 extend-filesystems[1463]: Found sda2 Jan 29 16:23:17.934804 extend-filesystems[1463]: Found sda3 Jan 29 16:23:17.934804 extend-filesystems[1463]: Found usr Jan 29 16:23:17.934804 extend-filesystems[1463]: Found sda4 Jan 29 16:23:17.934804 extend-filesystems[1463]: Found sda6 Jan 29 16:23:17.934804 extend-filesystems[1463]: Found sda7 Jan 29 16:23:17.934804 extend-filesystems[1463]: Found sda9 Jan 29 16:23:17.934804 extend-filesystems[1463]: Checking size of /dev/sda9 Jan 29 16:23:18.105596 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 29 16:23:18.105665 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 29 16:23:18.105693 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1279) Jan 29 16:23:17.973824 dbus-daemon[1461]: [system] SELinux support is enabled Jan 29 16:23:18.106274 coreos-metadata[1460]: Jan 29 16:23:17.938 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 29 16:23:18.106274 coreos-metadata[1460]: Jan 29 16:23:17.943 INFO Fetch successful Jan 29 16:23:18.106274 coreos-metadata[1460]: Jan 29 16:23:17.944 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 29 16:23:18.106274 coreos-metadata[1460]: Jan 29 16:23:17.944 INFO Fetch successful Jan 29 16:23:18.106274 coreos-metadata[1460]: Jan 29 16:23:17.944 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 29 16:23:18.106274 coreos-metadata[1460]: Jan 29 16:23:17.946 INFO Fetch successful Jan 29 16:23:18.106274 coreos-metadata[1460]: Jan 29 16:23:17.946 INFO Fetching 
http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 29 16:23:18.106274 coreos-metadata[1460]: Jan 29 16:23:17.947 INFO Fetch successful Jan 29 16:23:18.107787 extend-filesystems[1463]: Resized partition /dev/sda9 Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 14:24:41 UTC 2025 (1): Starting Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: ---------------------------------------------------- Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: ntp-4 is maintained by Network Time Foundation, Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: corporation. Support and training for ntp-4 are Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: available at https://www.nwtime.org/support Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: ---------------------------------------------------- Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: proto: precision = 0.106 usec (-23) Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: basedate set to 2025-01-17 Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: gps base set to 2025-01-19 (week 2350) Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: Listen normally on 3 eth0 10.128.0.57:123 Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: Listen normally on 4 lo [::1]:123 Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: bind(21) AF_INET6 fe80::4001:aff:fe80:39%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:39%2#123 Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: failed to init interface for address fe80::4001:aff:fe80:39%2 Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: Listening on routing socket on fd #21 for interface updates Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:23:18.137809 ntpd[1467]: 29 Jan 16:23:18 ntpd[1467]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:23:17.936791 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:23:17.980154 dbus-daemon[1461]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1350 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 16:23:18.139427 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:23:18.139427 extend-filesystems[1482]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 16:23:18.139427 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 29 16:23:18.139427 extend-filesystems[1482]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. 
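Two things complete in this stretch: coreos-metadata walks the GCE metadata service, and extend-filesystems grows the root ext4 online. Hand-rolled equivalents for both (the curl header is the one GCE requires; the device and block counts come from the log):

    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/hostname
    resize2fs /dev/sda9   # online grow, 1617920 -> 2538491 4k blocks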
Jan 29 16:23:17.974452 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:23:18.014785 ntpd[1467]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 14:24:41 UTC 2025 (1): Starting Jan 29 16:23:18.182420 extend-filesystems[1463]: Resized filesystem in /dev/sda9 Jan 29 16:23:17.992478 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 29 16:23:18.014817 ntpd[1467]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 16:23:17.993426 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:23:18.191747 update_engine[1487]: I20250129 16:23:18.158170 1487 main.cc:92] Flatcar Update Engine starting Jan 29 16:23:18.191747 update_engine[1487]: I20250129 16:23:18.167284 1487 update_check_scheduler.cc:74] Next update check in 11m20s Jan 29 16:23:18.014833 ntpd[1467]: ---------------------------------------------------- Jan 29 16:23:18.001944 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:23:18.192414 jq[1491]: true Jan 29 16:23:18.014847 ntpd[1467]: ntp-4 is maintained by Network Time Foundation, Jan 29 16:23:18.038734 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:23:18.014862 ntpd[1467]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 16:23:18.042589 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:23:18.014877 ntpd[1467]: corporation. Support and training for ntp-4 are Jan 29 16:23:18.094706 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:23:18.014893 ntpd[1467]: available at https://www.nwtime.org/support Jan 29 16:23:18.095077 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:23:18.014908 ntpd[1467]: ---------------------------------------------------- Jan 29 16:23:18.097598 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:23:18.020338 ntpd[1467]: proto: precision = 0.106 usec (-23) Jan 29 16:23:18.097981 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:23:18.020840 ntpd[1467]: basedate set to 2025-01-17 Jan 29 16:23:18.116127 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:23:18.020864 ntpd[1467]: gps base set to 2025-01-19 (week 2350) Jan 29 16:23:18.116451 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:23:18.026706 ntpd[1467]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 16:23:18.134223 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:23:18.026783 ntpd[1467]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 16:23:18.134540 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
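
The resize2fs output above shows an on-line grow: /dev/sda9 goes from 1617920 to 2538491 4k blocks while mounted on /. A condensed sketch of what extend-filesystems effectively runs, assuming an ext4 root:

    # Grow a mounted ext4 filesystem to fill its (already enlarged) partition;
    # with no explicit size argument, resize2fs grows to the size of the device.
    ROOT_DEV=$(findmnt -no SOURCE /)   # /dev/sda9 in this log
    resize2fs "$ROOT_DEV"
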
Jan 29 16:23:18.028007 ntpd[1467]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 16:23:18.028074 ntpd[1467]: Listen normally on 3 eth0 10.128.0.57:123 Jan 29 16:23:18.028137 ntpd[1467]: Listen normally on 4 lo [::1]:123 Jan 29 16:23:18.028207 ntpd[1467]: bind(21) AF_INET6 fe80::4001:aff:fe80:39%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 16:23:18.028238 ntpd[1467]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:39%2#123 Jan 29 16:23:18.028394 ntpd[1467]: failed to init interface for address fe80::4001:aff:fe80:39%2 Jan 29 16:23:18.028447 ntpd[1467]: Listening on routing socket on fd #21 for interface updates Jan 29 16:23:18.031748 ntpd[1467]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:23:18.032207 ntpd[1467]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:23:18.198234 systemd-logind[1483]: Watching system buttons on /dev/input/event2 (Power Button) Jan 29 16:23:18.198268 systemd-logind[1483]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 29 16:23:18.198300 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:23:18.199415 systemd-logind[1483]: New seat seat0. Jan 29 16:23:18.203470 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:23:18.238586 jq[1499]: true Jan 29 16:23:18.251334 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:23:18.266649 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:23:18.303154 dbus-daemon[1461]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 16:23:18.332087 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:23:18.355598 tar[1497]: linux-amd64/helm Jan 29 16:23:18.377326 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:23:18.377703 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:23:18.377961 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:23:18.403091 bash[1528]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:23:18.409239 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 16:23:18.418038 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:23:18.418305 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:23:18.437388 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:23:18.462834 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:23:18.490978 systemd[1]: Starting sshkeys.service... Jan 29 16:23:18.552718 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:23:18.572789 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
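
The repeated ntpd failures to bind fe80::4001:aff:fe80:39%2 are typical of early boot: the IPv6 link-local address is usually still tentative while duplicate address detection runs, so bind() returns "Cannot assign requested address". ntpd rescans interfaces periodically and, once networkd reports "Gained IPv6LL" further down, binds normally ("Listen normally on 7 eth0 [fe80::...]" later in this log). A quick check using only iproute2:

    # A link-local address flagged "tentative" cannot be bound yet;
    # the flag clears when duplicate address detection completes.
    ip -6 addr show dev eth0 scope link
    #   inet6 fe80::4001:aff:fe80:39/64 scope link tentative
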
Jan 29 16:23:18.671593 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:23:18.737604 coreos-metadata[1533]: Jan 29 16:23:18.736 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 29 16:23:18.739028 coreos-metadata[1533]: Jan 29 16:23:18.738 INFO Fetch failed with 404: resource not found Jan 29 16:23:18.739028 coreos-metadata[1533]: Jan 29 16:23:18.738 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 29 16:23:18.739028 coreos-metadata[1533]: Jan 29 16:23:18.738 INFO Fetch successful Jan 29 16:23:18.739028 coreos-metadata[1533]: Jan 29 16:23:18.738 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 29 16:23:18.741595 coreos-metadata[1533]: Jan 29 16:23:18.741 INFO Fetch failed with 404: resource not found Jan 29 16:23:18.741595 coreos-metadata[1533]: Jan 29 16:23:18.741 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 29 16:23:18.744523 coreos-metadata[1533]: Jan 29 16:23:18.744 INFO Fetch failed with 404: resource not found Jan 29 16:23:18.744523 coreos-metadata[1533]: Jan 29 16:23:18.744 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 29 16:23:18.744523 coreos-metadata[1533]: Jan 29 16:23:18.744 INFO Fetch successful Jan 29 16:23:18.748698 unknown[1533]: wrote ssh authorized keys file for user: core Jan 29 16:23:18.789299 locksmithd[1530]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:23:18.791958 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:23:18.810060 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:23:18.814869 update-ssh-keys[1553]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:23:18.818145 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:23:18.835516 systemd[1]: Finished sshkeys.service. Jan 29 16:23:18.840618 dbus-daemon[1461]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 16:23:18.841270 dbus-daemon[1461]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1529 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 16:23:18.846349 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 16:23:18.877964 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 16:23:18.888333 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:23:18.888926 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:23:18.915290 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:23:18.997315 polkitd[1560]: Started polkitd version 121 Jan 29 16:23:19.004317 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
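
The key-fetch sequence above shows how the SSH-keys agent merges instance- and project-level metadata: it tries the deprecated instance sshKeys attribute, then instance ssh-keys, checks block-project-ssh-keys (404 here, so project keys stay enabled), then reads the project attributes. A sketch of the same logic with curl; fetch is a hypothetical helper, the endpoints are the ones in the log:

    MD=http://169.254.169.254/computeMetadata/v1
    fetch() { curl -sf -H "Metadata-Flavor: Google" "$MD/$1"; }   # hypothetical helper
    # Instance-level keys (new attribute first, deprecated one as fallback):
    keys=$(fetch instance/attributes/ssh-keys || fetch instance/attributes/sshKeys)
    # Project-level keys are merged in unless block-project-ssh-keys is "true":
    if [ "$(fetch instance/attributes/block-project-ssh-keys)" != "true" ]; then
        keys="$keys"$'\n'"$(fetch project/attributes/ssh-keys || fetch project/attributes/sshKeys)"
    fi
    printf '%s\n' "$keys"
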
Jan 29 16:23:19.016382 ntpd[1467]: bind(24) AF_INET6 fe80::4001:aff:fe80:39%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 16:23:19.018010 ntpd[1467]: 29 Jan 16:23:19 ntpd[1467]: bind(24) AF_INET6 fe80::4001:aff:fe80:39%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 16:23:19.018010 ntpd[1467]: 29 Jan 16:23:19 ntpd[1467]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:39%2#123 Jan 29 16:23:19.018010 ntpd[1467]: 29 Jan 16:23:19 ntpd[1467]: failed to init interface for address fe80::4001:aff:fe80:39%2 Jan 29 16:23:19.016431 ntpd[1467]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:39%2#123 Jan 29 16:23:19.016452 ntpd[1467]: failed to init interface for address fe80::4001:aff:fe80:39%2 Jan 29 16:23:19.018714 polkitd[1560]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 16:23:19.018817 polkitd[1560]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 16:23:19.021868 polkitd[1560]: Finished loading, compiling and executing 2 rules Jan 29 16:23:19.025830 dbus-daemon[1461]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 16:23:19.026246 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:23:19.030907 polkitd[1560]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 16:23:19.042117 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:23:19.053022 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:23:19.063277 containerd[1500]: time="2025-01-29T16:23:19.062164312Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:23:19.066877 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 16:23:19.082424 systemd-resolved[1354]: System hostname changed to 'ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal'. Jan 29 16:23:19.083508 systemd-hostnamed[1529]: Hostname set to <ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal> (transient) Jan 29 16:23:19.115454 containerd[1500]: time="2025-01-29T16:23:19.115179807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:23:19.119029 containerd[1500]: time="2025-01-29T16:23:19.118616243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:23:19.119029 containerd[1500]: time="2025-01-29T16:23:19.118681188Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:23:19.119029 containerd[1500]: time="2025-01-29T16:23:19.118707503Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:23:19.119029 containerd[1500]: time="2025-01-29T16:23:19.118970422Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:23:19.119029 containerd[1500]: time="2025-01-29T16:23:19.119013943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:23:19.119320 containerd[1500]: time="2025-01-29T16:23:19.119124062Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..."
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:23:19.119320 containerd[1500]: time="2025-01-29T16:23:19.119143928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:23:19.119946 containerd[1500]: time="2025-01-29T16:23:19.119604066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:23:19.119946 containerd[1500]: time="2025-01-29T16:23:19.119633618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:23:19.119946 containerd[1500]: time="2025-01-29T16:23:19.119654885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:23:19.119946 containerd[1500]: time="2025-01-29T16:23:19.119687943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:23:19.119946 containerd[1500]: time="2025-01-29T16:23:19.119849908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:23:19.120533 containerd[1500]: time="2025-01-29T16:23:19.120237677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:23:19.120700 containerd[1500]: time="2025-01-29T16:23:19.120557551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:23:19.120700 containerd[1500]: time="2025-01-29T16:23:19.120656722Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:23:19.121184 containerd[1500]: time="2025-01-29T16:23:19.120832549Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:23:19.121184 containerd[1500]: time="2025-01-29T16:23:19.120930618Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:23:19.127112 containerd[1500]: time="2025-01-29T16:23:19.126897904Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:23:19.127112 containerd[1500]: time="2025-01-29T16:23:19.126977707Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:23:19.127708 containerd[1500]: time="2025-01-29T16:23:19.127011014Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:23:19.127708 containerd[1500]: time="2025-01-29T16:23:19.127348139Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:23:19.127708 containerd[1500]: time="2025-01-29T16:23:19.127377114Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:23:19.127708 containerd[1500]: time="2025-01-29T16:23:19.127593137Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128490303Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128697737Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128726273Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128748969Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128770691Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128803146Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128823069Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128845350Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128870204Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128892317Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128916015Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128937520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128972767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.129396 containerd[1500]: time="2025-01-29T16:23:19.128999607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129021259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129044996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129065794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129088166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129109011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129130213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129149790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129173450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129191858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129209809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129227297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129249853Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129283751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129305910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.130024 containerd[1500]: time="2025-01-29T16:23:19.129325531Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:23:19.131592 containerd[1500]: time="2025-01-29T16:23:19.130712240Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:23:19.131592 containerd[1500]: time="2025-01-29T16:23:19.130821397Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:23:19.131592 containerd[1500]: time="2025-01-29T16:23:19.130844829Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:23:19.131592 containerd[1500]: time="2025-01-29T16:23:19.130869659Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:23:19.131592 containerd[1500]: time="2025-01-29T16:23:19.130888397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:23:19.131592 containerd[1500]: time="2025-01-29T16:23:19.130913198Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:23:19.131592 containerd[1500]: time="2025-01-29T16:23:19.130931149Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:23:19.131592 containerd[1500]: time="2025-01-29T16:23:19.130949077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 16:23:19.131959 containerd[1500]: time="2025-01-29T16:23:19.131391973Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:23:19.131959 containerd[1500]: time="2025-01-29T16:23:19.131481827Z" level=info msg="Connect containerd service" Jan 29 16:23:19.131959 containerd[1500]: time="2025-01-29T16:23:19.131529602Z" level=info msg="using legacy CRI server" Jan 29 16:23:19.131959 containerd[1500]: time="2025-01-29T16:23:19.131541246Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:23:19.133015 containerd[1500]: time="2025-01-29T16:23:19.132475649Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:23:19.133602 containerd[1500]: time="2025-01-29T16:23:19.133546494Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:23:19.133842 
containerd[1500]: time="2025-01-29T16:23:19.133798304Z" level=info msg="Start subscribing containerd event" Jan 29 16:23:19.133945 containerd[1500]: time="2025-01-29T16:23:19.133930340Z" level=info msg="Start recovering state" Jan 29 16:23:19.134314 containerd[1500]: time="2025-01-29T16:23:19.134078714Z" level=info msg="Start event monitor" Jan 29 16:23:19.134314 containerd[1500]: time="2025-01-29T16:23:19.134120576Z" level=info msg="Start snapshots syncer" Jan 29 16:23:19.134314 containerd[1500]: time="2025-01-29T16:23:19.134138489Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:23:19.134314 containerd[1500]: time="2025-01-29T16:23:19.134154377Z" level=info msg="Start streaming server" Jan 29 16:23:19.135151 containerd[1500]: time="2025-01-29T16:23:19.135030390Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:23:19.135151 containerd[1500]: time="2025-01-29T16:23:19.135105175Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:23:19.136902 containerd[1500]: time="2025-01-29T16:23:19.135310756Z" level=info msg="containerd successfully booted in 0.075029s" Jan 29 16:23:19.135431 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:23:19.259805 systemd-networkd[1350]: eth0: Gained IPv6LL Jan 29 16:23:19.264913 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:23:19.276946 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:23:19.293900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:19.313960 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:23:19.331442 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 29 16:23:19.356965 init.sh[1581]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 29 16:23:19.357361 init.sh[1581]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 29 16:23:19.357361 init.sh[1581]: + /usr/bin/google_instance_setup Jan 29 16:23:19.381129 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:23:19.408645 tar[1497]: linux-amd64/LICENSE Jan 29 16:23:19.409292 tar[1497]: linux-amd64/README.md Jan 29 16:23:19.429097 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:23:19.885186 instance-setup[1586]: INFO Running google_set_multiqueue. Jan 29 16:23:19.905199 instance-setup[1586]: INFO Set channels for eth0 to 2. Jan 29 16:23:19.910378 instance-setup[1586]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 29 16:23:19.912725 instance-setup[1586]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 29 16:23:19.912798 instance-setup[1586]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 29 16:23:19.914689 instance-setup[1586]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 29 16:23:19.914789 instance-setup[1586]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 29 16:23:19.917644 instance-setup[1586]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 29 16:23:19.917710 instance-setup[1586]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Jan 29 16:23:19.919640 instance-setup[1586]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 29 16:23:19.927763 instance-setup[1586]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 29 16:23:19.933036 instance-setup[1586]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 29 16:23:19.935100 instance-setup[1586]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 29 16:23:19.935159 instance-setup[1586]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 29 16:23:19.958118 init.sh[1581]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 29 16:23:20.121634 startup-script[1624]: INFO Starting startup scripts. Jan 29 16:23:20.127930 startup-script[1624]: INFO No startup scripts found in metadata. Jan 29 16:23:20.128007 startup-script[1624]: INFO Finished running startup scripts. Jan 29 16:23:20.148891 init.sh[1581]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 29 16:23:20.148891 init.sh[1581]: + daemon_pids=() Jan 29 16:23:20.148891 init.sh[1581]: + for d in accounts clock_skew network Jan 29 16:23:20.150998 init.sh[1581]: + daemon_pids+=($!) Jan 29 16:23:20.150998 init.sh[1581]: + for d in accounts clock_skew network Jan 29 16:23:20.151144 init.sh[1627]: + /usr/bin/google_accounts_daemon Jan 29 16:23:20.151503 init.sh[1581]: + daemon_pids+=($!) Jan 29 16:23:20.151503 init.sh[1581]: + for d in accounts clock_skew network Jan 29 16:23:20.151619 init.sh[1628]: + /usr/bin/google_clock_skew_daemon Jan 29 16:23:20.151913 init.sh[1581]: + daemon_pids+=($!) Jan 29 16:23:20.151913 init.sh[1581]: + NOTIFY_SOCKET=/run/systemd/notify Jan 29 16:23:20.151913 init.sh[1581]: + /usr/bin/systemd-notify --ready Jan 29 16:23:20.152041 init.sh[1629]: + /usr/bin/google_network_daemon Jan 29 16:23:20.164030 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 29 16:23:20.176397 init.sh[1581]: + wait -n 1627 1628 1629 Jan 29 16:23:20.390766 google-networking[1629]: INFO Starting Google Networking daemon. Jan 29 16:23:20.503098 google-clock-skew[1628]: INFO Starting Google Clock Skew daemon. Jan 29 16:23:20.510460 google-clock-skew[1628]: INFO Clock drift token has changed: 0. Jan 29 16:23:20.568468 groupadd[1639]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 29 16:23:20.572368 groupadd[1639]: group added to /etc/gshadow: name=google-sudoers Jan 29 16:23:20.630016 groupadd[1639]: new group: name=google-sudoers, GID=1000 Jan 29 16:23:20.662120 google-accounts[1627]: INFO Starting Google Accounts daemon. Jan 29 16:23:20.675709 google-accounts[1627]: WARNING OS Login not installed. Jan 29 16:23:20.677856 google-accounts[1627]: INFO Creating a new user account for 0. Jan 29 16:23:20.684136 init.sh[1647]: useradd: invalid user name '0': use --badname to ignore Jan 29 16:23:20.684759 google-accounts[1627]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 29 16:23:20.929918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:20.941834 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:23:20.947192 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:23:20.952340 systemd[1]: Startup finished in 1.079s (kernel) + 9.839s (initrd) + 9.405s (userspace) = 20.323s. 
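
The init.sh trace above spells out the guest-agent wrapper's supervision idiom: launch each daemon in the background, remember the PIDs, tell systemd the unit is ready, then block until the first daemon exits. The same pattern, condensed from the trace:

    #!/bin/bash
    # Kill all children if systemd stops the unit.
    trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
    daemon_pids=()
    for d in accounts clock_skew network; do
        "/usr/bin/google_${d}_daemon" & daemon_pids+=($!)
    done
    NOTIFY_SOCKET=/run/systemd/notify /usr/bin/systemd-notify --ready
    wait -n "${daemon_pids[@]}"   # returns as soon as any one daemon exits
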
Jan 29 16:23:21.002625 systemd-resolved[1354]: Clock change detected. Flushing caches. Jan 29 16:23:21.002919 google-clock-skew[1628]: INFO Synced system time with hardware clock. Jan 29 16:23:21.926268 kubelet[1654]: E0129 16:23:21.926174 1654 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:23:21.929367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:23:21.929675 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:23:21.930350 systemd[1]: kubelet.service: Consumed 1.219s CPU time, 243.1M memory peak. Jan 29 16:23:22.034383 ntpd[1467]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:39%2]:123 Jan 29 16:23:22.034949 ntpd[1467]: 29 Jan 16:23:22 ntpd[1467]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:39%2]:123 Jan 29 16:23:23.615044 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:23:23.619968 systemd[1]: Started sshd@0-10.128.0.57:22-147.75.109.163:57044.service - OpenSSH per-connection server daemon (147.75.109.163:57044). Jan 29 16:23:23.939930 sshd[1667]: Accepted publickey for core from 147.75.109.163 port 57044 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:23:23.941890 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:23.955980 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:23:23.960917 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:23:23.964640 systemd-logind[1483]: New session 1 of user core. Jan 29 16:23:23.989227 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:23:23.996110 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:23:24.021495 (systemd)[1671]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:23:24.025203 systemd-logind[1483]: New session c1 of user core. Jan 29 16:23:24.206965 systemd[1671]: Queued start job for default target default.target. Jan 29 16:23:24.218167 systemd[1671]: Created slice app.slice - User Application Slice. Jan 29 16:23:24.218220 systemd[1671]: Reached target paths.target - Paths. Jan 29 16:23:24.218433 systemd[1671]: Reached target timers.target - Timers. Jan 29 16:23:24.220270 systemd[1671]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:23:24.243777 systemd[1671]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:23:24.243973 systemd[1671]: Reached target sockets.target - Sockets. Jan 29 16:23:24.244519 systemd[1671]: Reached target basic.target - Basic System. Jan 29 16:23:24.244650 systemd[1671]: Reached target default.target - Main User Target. Jan 29 16:23:24.244705 systemd[1671]: Startup finished in 209ms. Jan 29 16:23:24.244988 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:23:24.259826 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:23:24.489973 systemd[1]: Started sshd@1-10.128.0.57:22-147.75.109.163:57052.service - OpenSSH per-connection server daemon (147.75.109.163:57052). 
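
The kubelet failure above is expected at this stage: /var/lib/kubelet/config.yaml is written by kubeadm during init/join, so the unit crash-loops (restart counters appear further down) until the node is bootstrapped. For reference, a minimal hand-written KubeletConfiguration of the kind the --config flag expects; illustrative values only, not what kubeadm generates:

    # Sketch: smallest useful /var/lib/kubelet/config.yaml.
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF
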
Jan 29 16:23:24.785628 sshd[1682]: Accepted publickey for core from 147.75.109.163 port 57052 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:23:24.785031 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:24.792895 systemd-logind[1483]: New session 2 of user core. Jan 29 16:23:24.800819 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:23:24.998929 sshd[1684]: Connection closed by 147.75.109.163 port 57052 Jan 29 16:23:24.999864 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:25.004592 systemd[1]: sshd@1-10.128.0.57:22-147.75.109.163:57052.service: Deactivated successfully. Jan 29 16:23:25.006938 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:23:25.008712 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:23:25.010285 systemd-logind[1483]: Removed session 2. Jan 29 16:23:25.054962 systemd[1]: Started sshd@2-10.128.0.57:22-147.75.109.163:57062.service - OpenSSH per-connection server daemon (147.75.109.163:57062). Jan 29 16:23:25.349146 sshd[1690]: Accepted publickey for core from 147.75.109.163 port 57062 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:23:25.350970 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:25.356838 systemd-logind[1483]: New session 3 of user core. Jan 29 16:23:25.367833 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:23:25.556892 sshd[1692]: Connection closed by 147.75.109.163 port 57062 Jan 29 16:23:25.557862 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:25.562463 systemd[1]: sshd@2-10.128.0.57:22-147.75.109.163:57062.service: Deactivated successfully. Jan 29 16:23:25.564904 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:23:25.566935 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:23:25.568497 systemd-logind[1483]: Removed session 3. Jan 29 16:23:25.625998 systemd[1]: Started sshd@3-10.128.0.57:22-147.75.109.163:57078.service - OpenSSH per-connection server daemon (147.75.109.163:57078). Jan 29 16:23:25.914745 sshd[1698]: Accepted publickey for core from 147.75.109.163 port 57078 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:23:25.915475 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:25.921384 systemd-logind[1483]: New session 4 of user core. Jan 29 16:23:25.927770 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:23:26.126338 sshd[1700]: Connection closed by 147.75.109.163 port 57078 Jan 29 16:23:26.127236 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:26.131738 systemd[1]: sshd@3-10.128.0.57:22-147.75.109.163:57078.service: Deactivated successfully. Jan 29 16:23:26.134247 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:23:26.136354 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:23:26.137989 systemd-logind[1483]: Removed session 4. Jan 29 16:23:26.187985 systemd[1]: Started sshd@4-10.128.0.57:22-147.75.109.163:57094.service - OpenSSH per-connection server daemon (147.75.109.163:57094). 
Jan 29 16:23:26.480823 sshd[1706]: Accepted publickey for core from 147.75.109.163 port 57094 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:23:26.482422 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:26.488732 systemd-logind[1483]: New session 5 of user core. Jan 29 16:23:26.495810 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:23:26.676949 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:23:26.677470 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:23:26.695719 sudo[1709]: pam_unix(sudo:session): session closed for user root Jan 29 16:23:26.738749 sshd[1708]: Connection closed by 147.75.109.163 port 57094 Jan 29 16:23:26.740347 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:26.746156 systemd[1]: sshd@4-10.128.0.57:22-147.75.109.163:57094.service: Deactivated successfully. Jan 29 16:23:26.748474 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:23:26.749601 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:23:26.751217 systemd-logind[1483]: Removed session 5. Jan 29 16:23:26.798973 systemd[1]: Started sshd@5-10.128.0.57:22-147.75.109.163:57098.service - OpenSSH per-connection server daemon (147.75.109.163:57098). Jan 29 16:23:27.088317 sshd[1715]: Accepted publickey for core from 147.75.109.163 port 57098 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:23:27.089784 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:27.095949 systemd-logind[1483]: New session 6 of user core. Jan 29 16:23:27.106813 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:23:27.266653 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:23:27.267155 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:23:27.272511 sudo[1719]: pam_unix(sudo:session): session closed for user root Jan 29 16:23:27.286295 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:23:27.286805 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:23:27.303053 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:23:27.341306 augenrules[1741]: No rules Jan 29 16:23:27.343410 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:23:27.343813 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:23:27.346033 sudo[1718]: pam_unix(sudo:session): session closed for user root Jan 29 16:23:27.388489 sshd[1717]: Connection closed by 147.75.109.163 port 57098 Jan 29 16:23:27.389396 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:27.394022 systemd[1]: sshd@5-10.128.0.57:22-147.75.109.163:57098.service: Deactivated successfully. Jan 29 16:23:27.396493 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:23:27.398500 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:23:27.400000 systemd-logind[1483]: Removed session 6. Jan 29 16:23:27.449059 systemd[1]: Started sshd@6-10.128.0.57:22-147.75.109.163:54938.service - OpenSSH per-connection server daemon (147.75.109.163:54938). 
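
Session 6 above removes the shipped audit rule files and restarts audit-rules, after which augenrules finds nothing to load ("No rules"). The standard auditd tooling involved, for reference; the paths are the ones in the log:

    # augenrules concatenates /etc/audit/rules.d/*.rules into the active rule set.
    ls /etc/audit/rules.d/   # empty after the rm above
    augenrules --load        # rebuilds and loads the (now empty) rule set
    auditctl -l              # reports "No rules"
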
Jan 29 16:23:27.741392 sshd[1750]: Accepted publickey for core from 147.75.109.163 port 54938 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:23:27.743293 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:27.750411 systemd-logind[1483]: New session 7 of user core. Jan 29 16:23:27.764870 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:23:27.921868 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:23:27.922370 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:23:28.394200 (dockerd)[1770]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:23:28.394454 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:23:28.834404 dockerd[1770]: time="2025-01-29T16:23:28.834201457Z" level=info msg="Starting up" Jan 29 16:23:29.058728 dockerd[1770]: time="2025-01-29T16:23:29.058639726Z" level=info msg="Loading containers: start." Jan 29 16:23:29.277610 kernel: Initializing XFRM netlink socket Jan 29 16:23:29.400846 systemd-networkd[1350]: docker0: Link UP Jan 29 16:23:29.451124 dockerd[1770]: time="2025-01-29T16:23:29.451065078Z" level=info msg="Loading containers: done." Jan 29 16:23:29.469992 dockerd[1770]: time="2025-01-29T16:23:29.469922177Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:23:29.470232 dockerd[1770]: time="2025-01-29T16:23:29.470054092Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:23:29.470232 dockerd[1770]: time="2025-01-29T16:23:29.470206658Z" level=info msg="Daemon has completed initialization" Jan 29 16:23:29.513772 dockerd[1770]: time="2025-01-29T16:23:29.513499660Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:23:29.515658 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:23:30.610234 containerd[1500]: time="2025-01-29T16:23:30.610180908Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 16:23:31.162920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132191917.mount: Deactivated successfully. Jan 29 16:23:32.098952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:23:32.106374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:32.391833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
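
dockerd comes up on the overlay2 storage driver and warns that native diff is degraded because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR. Two quick checks against the daemon exactly as configured in this log, using the standard docker CLI and the Unix socket it announces:

    docker info --format '{{.Driver}}'                              # overlay2
    curl -s --unix-socket /run/docker.sock http://localhost/_ping   # OK
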
Jan 29 16:23:32.394831 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:23:32.495830 kubelet[2025]: E0129 16:23:32.495692 2025 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:23:32.501650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:23:32.502098 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:23:32.503063 systemd[1]: kubelet.service: Consumed 221ms CPU time, 97.9M memory peak. Jan 29 16:23:33.158041 containerd[1500]: time="2025-01-29T16:23:33.157963133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:33.159437 containerd[1500]: time="2025-01-29T16:23:33.159363218Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32683640" Jan 29 16:23:33.161025 containerd[1500]: time="2025-01-29T16:23:33.160953149Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:33.165340 containerd[1500]: time="2025-01-29T16:23:33.165266386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:33.167461 containerd[1500]: time="2025-01-29T16:23:33.167160660Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.556893837s" Jan 29 16:23:33.167461 containerd[1500]: time="2025-01-29T16:23:33.167221345Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 16:23:33.201083 containerd[1500]: time="2025-01-29T16:23:33.201039784Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 16:23:34.864150 containerd[1500]: time="2025-01-29T16:23:34.864077081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:34.865747 containerd[1500]: time="2025-01-29T16:23:34.865699168Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29607679" Jan 29 16:23:34.867114 containerd[1500]: time="2025-01-29T16:23:34.867069892Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:34.873280 containerd[1500]: time="2025-01-29T16:23:34.873205324Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:34.875149 containerd[1500]: time="2025-01-29T16:23:34.874660600Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.673569619s" Jan 29 16:23:34.875149 containerd[1500]: time="2025-01-29T16:23:34.874705423Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 16:23:34.906834 containerd[1500]: time="2025-01-29T16:23:34.906784089Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 16:23:36.024244 containerd[1500]: time="2025-01-29T16:23:36.024152188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:36.025804 containerd[1500]: time="2025-01-29T16:23:36.025726668Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17784980" Jan 29 16:23:36.027657 containerd[1500]: time="2025-01-29T16:23:36.027582356Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:36.031645 containerd[1500]: time="2025-01-29T16:23:36.031546961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:36.033317 containerd[1500]: time="2025-01-29T16:23:36.033101326Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.126263552s" Jan 29 16:23:36.033317 containerd[1500]: time="2025-01-29T16:23:36.033146805Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 16:23:36.064235 containerd[1500]: time="2025-01-29T16:23:36.064187152Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 16:23:37.337377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200093004.mount: Deactivated successfully. 
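
The PullImage/ImageCreate records above come from containerd's CRI image service. The same pulls can be reproduced by hand against this containerd, assuming ctr or crictl is installed; images land in the k8s.io namespace the CRI plugin uses:

    # Pull directly through containerd:
    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.30.9
    # Or through the CRI endpoint from this log:
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-proxy:v1.30.9
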
Jan 29 16:23:37.905396 containerd[1500]: time="2025-01-29T16:23:37.905320597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:37.906874 containerd[1500]: time="2025-01-29T16:23:37.906798933Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29060232" Jan 29 16:23:37.908662 containerd[1500]: time="2025-01-29T16:23:37.908590876Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:37.911524 containerd[1500]: time="2025-01-29T16:23:37.911449043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:37.912588 containerd[1500]: time="2025-01-29T16:23:37.912384203Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.84814227s" Jan 29 16:23:37.912588 containerd[1500]: time="2025-01-29T16:23:37.912432534Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 16:23:37.943881 containerd[1500]: time="2025-01-29T16:23:37.943832691Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 16:23:38.378969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2412927685.mount: Deactivated successfully. 
Jan 29 16:23:39.448590 containerd[1500]: time="2025-01-29T16:23:39.448500638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:39.450498 containerd[1500]: time="2025-01-29T16:23:39.450424864Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Jan 29 16:23:39.451742 containerd[1500]: time="2025-01-29T16:23:39.451664089Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:39.455733 containerd[1500]: time="2025-01-29T16:23:39.455664975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:39.458015 containerd[1500]: time="2025-01-29T16:23:39.457449273Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.513563008s" Jan 29 16:23:39.458015 containerd[1500]: time="2025-01-29T16:23:39.457496877Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 16:23:39.490220 containerd[1500]: time="2025-01-29T16:23:39.490175563Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 16:23:39.882551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1986039118.mount: Deactivated successfully. 
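
One detail worth flagging: the CRI config dumped earlier pins SandboxImage to registry.k8s.io/pause:3.8, while the pull beginning here fetches pause:3.9, so both tags can end up in the store. The configured sandbox image can be confirmed from containerd itself; a sketch:

    # Show which pause image containerd's CRI plugin will actually use.
    containerd config dump | grep sandbox_image
    #   sandbox_image = "registry.k8s.io/pause:3.8"
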
Jan 29 16:23:39.888408 containerd[1500]: time="2025-01-29T16:23:39.888340169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:39.889599 containerd[1500]: time="2025-01-29T16:23:39.889514163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Jan 29 16:23:39.891216 containerd[1500]: time="2025-01-29T16:23:39.891147288Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:39.894481 containerd[1500]: time="2025-01-29T16:23:39.894414168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:39.895602 containerd[1500]: time="2025-01-29T16:23:39.895407588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 405.079761ms" Jan 29 16:23:39.895602 containerd[1500]: time="2025-01-29T16:23:39.895453056Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 16:23:39.928798 containerd[1500]: time="2025-01-29T16:23:39.928492431Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 16:23:40.339079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3574550714.mount: Deactivated successfully. Jan 29 16:23:42.573058 containerd[1500]: time="2025-01-29T16:23:42.572977595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:42.574797 containerd[1500]: time="2025-01-29T16:23:42.574734828Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" Jan 29 16:23:42.576276 containerd[1500]: time="2025-01-29T16:23:42.576197099Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:42.582044 containerd[1500]: time="2025-01-29T16:23:42.581230166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:42.583065 containerd[1500]: time="2025-01-29T16:23:42.583017565Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.654471194s" Jan 29 16:23:42.583188 containerd[1500]: time="2025-01-29T16:23:42.583069079Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 16:23:42.598974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
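
kubelet is now on its second scheduled restart. systemd keeps this bookkeeping per unit, so the crash loop can be quantified without scraping the journal; standard systemctl properties:

    systemctl show kubelet.service -p NRestarts -p Restart -p RestartUSec
    #   e.g. NRestarts=2 (Restart/RestartUSec depend on the unit's drop-ins)
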
Jan 29 16:23:42.610500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:42.890354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:42.902193 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:23:42.985962 kubelet[2185]: E0129 16:23:42.985912 2185 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:23:42.989769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:23:42.990007 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:23:42.990737 systemd[1]: kubelet.service: Consumed 201ms CPU time, 97.3M memory peak. Jan 29 16:23:46.895637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:46.895970 systemd[1]: kubelet.service: Consumed 201ms CPU time, 97.3M memory peak. Jan 29 16:23:46.911103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:46.952631 systemd[1]: Reload requested from client PID 2247 ('systemctl') (unit session-7.scope)... Jan 29 16:23:46.952660 systemd[1]: Reloading... Jan 29 16:23:47.102597 zram_generator::config[2289]: No configuration found. Jan 29 16:23:47.277057 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:23:47.419817 systemd[1]: Reloading finished in 466 ms. Jan 29 16:23:47.506782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:47.518178 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:23:47.520100 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:47.520777 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:23:47.521127 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:47.521190 systemd[1]: kubelet.service: Consumed 125ms CPU time, 82.6M memory peak. Jan 29 16:23:47.527957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:47.759160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:47.767004 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:23:47.826425 kubelet[2347]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:23:47.826425 kubelet[2347]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:23:47.826425 kubelet[2347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:23:47.826425 kubelet[2347]: I0129 16:23:47.826187 2347 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:23:48.460339 kubelet[2347]: I0129 16:23:48.460278 2347 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 16:23:48.460339 kubelet[2347]: I0129 16:23:48.460320 2347 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:23:48.462613 kubelet[2347]: I0129 16:23:48.460986 2347 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 16:23:48.495651 kubelet[2347]: I0129 16:23:48.495607 2347 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:23:48.495980 kubelet[2347]: E0129 16:23:48.495680 2347 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:48.513694 kubelet[2347]: I0129 16:23:48.513631 2347 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 16:23:48.514055 kubelet[2347]: I0129 16:23:48.513990 2347 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:23:48.514367 kubelet[2347]: I0129 16:23:48.514041 2347 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 16:23:48.515403 kubelet[2347]: I0129 16:23:48.515376 2347 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:23:48.515526 kubelet[2347]: I0129 16:23:48.515413 2347 container_manager_linux.go:301] "Creating 
device plugin manager" Jan 29 16:23:48.515644 kubelet[2347]: I0129 16:23:48.515621 2347 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:23:48.517335 kubelet[2347]: I0129 16:23:48.517009 2347 kubelet.go:400] "Attempting to sync node with API server" Jan 29 16:23:48.517335 kubelet[2347]: I0129 16:23:48.517039 2347 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:23:48.517335 kubelet[2347]: I0129 16:23:48.517073 2347 kubelet.go:312] "Adding apiserver pod source" Jan 29 16:23:48.517335 kubelet[2347]: I0129 16:23:48.517100 2347 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:23:48.523994 kubelet[2347]: W0129 16:23:48.523916 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:48.523994 kubelet[2347]: E0129 16:23:48.523985 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:48.525159 kubelet[2347]: W0129 16:23:48.524323 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:48.525159 kubelet[2347]: E0129 16:23:48.524387 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:48.525159 kubelet[2347]: I0129 16:23:48.524940 2347 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:23:48.528910 kubelet[2347]: I0129 16:23:48.527353 2347 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:23:48.528910 kubelet[2347]: W0129 16:23:48.527477 2347 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 16:23:48.528910 kubelet[2347]: I0129 16:23:48.528694 2347 server.go:1264] "Started kubelet" Jan 29 16:23:48.530824 kubelet[2347]: I0129 16:23:48.530762 2347 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:23:48.532380 kubelet[2347]: I0129 16:23:48.532101 2347 server.go:455] "Adding debug handlers to kubelet server" Jan 29 16:23:48.538875 kubelet[2347]: I0129 16:23:48.538013 2347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:23:48.538875 kubelet[2347]: I0129 16:23:48.538423 2347 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:23:48.538875 kubelet[2347]: I0129 16:23:48.538738 2347 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:23:48.543002 kubelet[2347]: E0129 16:23:48.542842 2347 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal.181f366e4b768b49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,UID:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,},FirstTimestamp:2025-01-29 16:23:48.528663369 +0000 UTC m=+0.755784980,LastTimestamp:2025-01-29 16:23:48.528663369 +0000 UTC m=+0.755784980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,}" Jan 29 16:23:48.547450 kubelet[2347]: I0129 16:23:48.547059 2347 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 16:23:48.548135 kubelet[2347]: E0129 16:23:48.548084 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.57:6443: connect: connection refused" interval="200ms" Jan 29 16:23:48.548528 kubelet[2347]: I0129 16:23:48.548506 2347 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:23:48.548807 kubelet[2347]: I0129 16:23:48.548781 2347 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:23:48.552493 kubelet[2347]: I0129 16:23:48.552463 2347 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:23:48.552631 kubelet[2347]: I0129 16:23:48.552570 2347 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:23:48.555048 kubelet[2347]: I0129 16:23:48.555027 2347 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:23:48.575824 kubelet[2347]: I0129 16:23:48.575748 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:23:48.577350 kubelet[2347]: I0129 16:23:48.577301 2347 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:23:48.577350 kubelet[2347]: I0129 16:23:48.577330 2347 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:23:48.577350 kubelet[2347]: I0129 16:23:48.577354 2347 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 16:23:48.577602 kubelet[2347]: E0129 16:23:48.577415 2347 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:23:48.586588 kubelet[2347]: E0129 16:23:48.586527 2347 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:23:48.587045 kubelet[2347]: W0129 16:23:48.586882 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:48.587045 kubelet[2347]: E0129 16:23:48.586959 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:48.587705 kubelet[2347]: W0129 16:23:48.587646 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:48.587705 kubelet[2347]: E0129 16:23:48.587705 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:48.602879 kubelet[2347]: I0129 16:23:48.602755 2347 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:23:48.602879 kubelet[2347]: I0129 16:23:48.602786 2347 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:23:48.602879 kubelet[2347]: I0129 16:23:48.602880 2347 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:23:48.606770 kubelet[2347]: I0129 16:23:48.606627 2347 policy_none.go:49] "None policy: Start" Jan 29 16:23:48.607672 kubelet[2347]: I0129 16:23:48.607599 2347 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:23:48.607672 kubelet[2347]: I0129 16:23:48.607664 2347 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:23:48.620789 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:23:48.640291 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:23:48.645247 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 16:23:48.651875 kubelet[2347]: I0129 16:23:48.651836 2347 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:23:48.652208 kubelet[2347]: I0129 16:23:48.652137 2347 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:23:48.652413 kubelet[2347]: I0129 16:23:48.652328 2347 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:23:48.655368 kubelet[2347]: E0129 16:23:48.655316 2347 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" not found" Jan 29 16:23:48.658596 kubelet[2347]: I0129 16:23:48.658513 2347 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.659198 kubelet[2347]: E0129 16:23:48.659153 2347 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.57:6443/api/v1/nodes\": dial tcp 10.128.0.57:6443: connect: connection refused" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.678548 kubelet[2347]: I0129 16:23:48.678432 2347 topology_manager.go:215] "Topology Admit Handler" podUID="086f8bc3464f3a27348a00bcfa9fe106" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.686193 kubelet[2347]: I0129 16:23:48.686115 2347 topology_manager.go:215] "Topology Admit Handler" podUID="d3b9c194898cf9c2e4fb3a646477ca34" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.692742 kubelet[2347]: I0129 16:23:48.692661 2347 topology_manager.go:215] "Topology Admit Handler" podUID="4ecae98e6307a4bd8310a797bc4f622f" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.701988 systemd[1]: Created slice kubepods-burstable-pod086f8bc3464f3a27348a00bcfa9fe106.slice - libcontainer container kubepods-burstable-pod086f8bc3464f3a27348a00bcfa9fe106.slice. Jan 29 16:23:48.719888 systemd[1]: Created slice kubepods-burstable-podd3b9c194898cf9c2e4fb3a646477ca34.slice - libcontainer container kubepods-burstable-podd3b9c194898cf9c2e4fb3a646477ca34.slice. Jan 29 16:23:48.730049 systemd[1]: Created slice kubepods-burstable-pod4ecae98e6307a4bd8310a797bc4f622f.slice - libcontainer container kubepods-burstable-pod4ecae98e6307a4bd8310a797bc4f622f.slice. 
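The three "Topology Admit Handler" entries are the control-plane static pods read from /etc/kubernetes/manifests (the path added during startup above); their podUID values reappear verbatim in the per-pod burstable slice names. A minimal example of the manifest shape the kubelet consumes from that directory, with every value hypothetical:

    # /etc/kubernetes/manifests/kube-scheduler.yaml (shape only; not this node's file)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-scheduler
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-scheduler
        image: registry.k8s.io/kube-scheduler:v1.30.1
        command:
        - kube-scheduler
        - --kubeconfig=/etc/kubernetes/scheduler.conf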
Jan 29 16:23:48.749500 kubelet[2347]: E0129 16:23:48.749410 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.57:6443: connect: connection refused" interval="400ms" Jan 29 16:23:48.753052 kubelet[2347]: I0129 16:23:48.752989 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/086f8bc3464f3a27348a00bcfa9fe106-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"086f8bc3464f3a27348a00bcfa9fe106\") " pod="kube-system/kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.753052 kubelet[2347]: I0129 16:23:48.753038 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3b9c194898cf9c2e4fb3a646477ca34-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"d3b9c194898cf9c2e4fb3a646477ca34\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.753436 kubelet[2347]: I0129 16:23:48.753081 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3b9c194898cf9c2e4fb3a646477ca34-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"d3b9c194898cf9c2e4fb3a646477ca34\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.753436 kubelet[2347]: I0129 16:23:48.753108 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ecae98e6307a4bd8310a797bc4f622f-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"4ecae98e6307a4bd8310a797bc4f622f\") " pod="kube-system/kube-scheduler-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.753436 kubelet[2347]: I0129 16:23:48.753137 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/086f8bc3464f3a27348a00bcfa9fe106-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"086f8bc3464f3a27348a00bcfa9fe106\") " pod="kube-system/kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.753436 kubelet[2347]: I0129 16:23:48.753166 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/086f8bc3464f3a27348a00bcfa9fe106-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"086f8bc3464f3a27348a00bcfa9fe106\") " pod="kube-system/kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.753653 kubelet[2347]: I0129 16:23:48.753228 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/d3b9c194898cf9c2e4fb3a646477ca34-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"d3b9c194898cf9c2e4fb3a646477ca34\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.753653 kubelet[2347]: I0129 16:23:48.753262 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3b9c194898cf9c2e4fb3a646477ca34-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"d3b9c194898cf9c2e4fb3a646477ca34\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.753653 kubelet[2347]: I0129 16:23:48.753293 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3b9c194898cf9c2e4fb3a646477ca34-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"d3b9c194898cf9c2e4fb3a646477ca34\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.868029 kubelet[2347]: I0129 16:23:48.867959 2347 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:48.868706 kubelet[2347]: E0129 16:23:48.868475 2347 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.57:6443/api/v1/nodes\": dial tcp 10.128.0.57:6443: connect: connection refused" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:49.016878 containerd[1500]: time="2025-01-29T16:23:49.016719163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,Uid:086f8bc3464f3a27348a00bcfa9fe106,Namespace:kube-system,Attempt:0,}" Jan 29 16:23:49.029774 containerd[1500]: time="2025-01-29T16:23:49.029695185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,Uid:d3b9c194898cf9c2e4fb3a646477ca34,Namespace:kube-system,Attempt:0,}" Jan 29 16:23:49.034814 containerd[1500]: time="2025-01-29T16:23:49.034670192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,Uid:4ecae98e6307a4bd8310a797bc4f622f,Namespace:kube-system,Attempt:0,}" Jan 29 16:23:49.123801 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 29 16:23:49.150901 kubelet[2347]: E0129 16:23:49.150840 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.57:6443: connect: connection refused" interval="800ms" Jan 29 16:23:49.273902 kubelet[2347]: I0129 16:23:49.273763 2347 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:49.274220 kubelet[2347]: E0129 16:23:49.274160 2347 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.57:6443/api/v1/nodes\": dial tcp 10.128.0.57:6443: connect: connection refused" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:49.471868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3998967927.mount: Deactivated successfully. Jan 29 16:23:49.480278 containerd[1500]: time="2025-01-29T16:23:49.480216918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:23:49.484012 containerd[1500]: time="2025-01-29T16:23:49.483940667Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jan 29 16:23:49.485247 containerd[1500]: time="2025-01-29T16:23:49.485160906Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:23:49.486386 containerd[1500]: time="2025-01-29T16:23:49.486321002Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:23:49.489258 containerd[1500]: time="2025-01-29T16:23:49.489097479Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:23:49.490759 containerd[1500]: time="2025-01-29T16:23:49.490617331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:23:49.492209 containerd[1500]: time="2025-01-29T16:23:49.492040441Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:23:49.493517 containerd[1500]: time="2025-01-29T16:23:49.493405829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:23:49.496836 containerd[1500]: time="2025-01-29T16:23:49.496138419Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 461.353603ms" Jan 29 16:23:49.498519 containerd[1500]: time="2025-01-29T16:23:49.498475161Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 468.631711ms" Jan 29 16:23:49.499283 containerd[1500]: time="2025-01-29T16:23:49.499225916Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 482.368735ms" Jan 29 16:23:49.688228 containerd[1500]: time="2025-01-29T16:23:49.685813426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:23:49.688228 containerd[1500]: time="2025-01-29T16:23:49.687967552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:23:49.688228 containerd[1500]: time="2025-01-29T16:23:49.687991772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:49.688228 containerd[1500]: time="2025-01-29T16:23:49.688124071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:49.692586 containerd[1500]: time="2025-01-29T16:23:49.692040591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:23:49.692755 containerd[1500]: time="2025-01-29T16:23:49.692445817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:23:49.692755 containerd[1500]: time="2025-01-29T16:23:49.692631832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:49.694580 containerd[1500]: time="2025-01-29T16:23:49.694499174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:49.697757 containerd[1500]: time="2025-01-29T16:23:49.697639665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:23:49.697871 containerd[1500]: time="2025-01-29T16:23:49.697800787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:23:49.697934 containerd[1500]: time="2025-01-29T16:23:49.697884024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:49.698330 containerd[1500]: time="2025-01-29T16:23:49.698273502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:49.709428 kubelet[2347]: W0129 16:23:49.709325 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:49.709618 kubelet[2347]: E0129 16:23:49.709442 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:49.739828 systemd[1]: Started cri-containerd-1b8cd5ec3ad57e31a5246b039e7a36c402ecb6de4b7882942f1b87ef4b5e176d.scope - libcontainer container 1b8cd5ec3ad57e31a5246b039e7a36c402ecb6de4b7882942f1b87ef4b5e176d. Jan 29 16:23:49.747764 systemd[1]: Started cri-containerd-611c484f34db4dd132b6f365f302f98b4c7a9c5480516ab1301fbc7b72674a7d.scope - libcontainer container 611c484f34db4dd132b6f365f302f98b4c7a9c5480516ab1301fbc7b72674a7d. Jan 29 16:23:49.751063 systemd[1]: Started cri-containerd-962e5b56cf39eacb8ce51d526845979ccfe560aea5a99386c075a65982967693.scope - libcontainer container 962e5b56cf39eacb8ce51d526845979ccfe560aea5a99386c075a65982967693. Jan 29 16:23:49.845634 containerd[1500]: time="2025-01-29T16:23:49.845420836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,Uid:4ecae98e6307a4bd8310a797bc4f622f,Namespace:kube-system,Attempt:0,} returns sandbox id \"611c484f34db4dd132b6f365f302f98b4c7a9c5480516ab1301fbc7b72674a7d\"" Jan 29 16:23:49.856103 kubelet[2347]: E0129 16:23:49.855957 2347 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-21291" Jan 29 16:23:49.861168 containerd[1500]: time="2025-01-29T16:23:49.861108990Z" level=info msg="CreateContainer within sandbox \"611c484f34db4dd132b6f365f302f98b4c7a9c5480516ab1301fbc7b72674a7d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:23:49.870152 containerd[1500]: time="2025-01-29T16:23:49.870102649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,Uid:d3b9c194898cf9c2e4fb3a646477ca34,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b8cd5ec3ad57e31a5246b039e7a36c402ecb6de4b7882942f1b87ef4b5e176d\"" Jan 29 16:23:49.873907 kubelet[2347]: W0129 16:23:49.873696 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:49.874931 kubelet[2347]: E0129 16:23:49.873932 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:49.875026 kubelet[2347]: E0129 16:23:49.874811 2347 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" 
podName="kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flat" Jan 29 16:23:49.878844 containerd[1500]: time="2025-01-29T16:23:49.878801683Z" level=info msg="CreateContainer within sandbox \"1b8cd5ec3ad57e31a5246b039e7a36c402ecb6de4b7882942f1b87ef4b5e176d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:23:49.883196 containerd[1500]: time="2025-01-29T16:23:49.883049535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,Uid:086f8bc3464f3a27348a00bcfa9fe106,Namespace:kube-system,Attempt:0,} returns sandbox id \"962e5b56cf39eacb8ce51d526845979ccfe560aea5a99386c075a65982967693\"" Jan 29 16:23:49.887333 kubelet[2347]: E0129 16:23:49.887293 2347 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-21291" Jan 29 16:23:49.889524 containerd[1500]: time="2025-01-29T16:23:49.889489331Z" level=info msg="CreateContainer within sandbox \"962e5b56cf39eacb8ce51d526845979ccfe560aea5a99386c075a65982967693\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:23:49.894206 containerd[1500]: time="2025-01-29T16:23:49.894149901Z" level=info msg="CreateContainer within sandbox \"611c484f34db4dd132b6f365f302f98b4c7a9c5480516ab1301fbc7b72674a7d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7497840cd46732552b9c954af8a327c852afbe6517693ad077a82a8281973ef6\"" Jan 29 16:23:49.894959 containerd[1500]: time="2025-01-29T16:23:49.894920997Z" level=info msg="StartContainer for \"7497840cd46732552b9c954af8a327c852afbe6517693ad077a82a8281973ef6\"" Jan 29 16:23:49.913768 containerd[1500]: time="2025-01-29T16:23:49.913595829Z" level=info msg="CreateContainer within sandbox \"1b8cd5ec3ad57e31a5246b039e7a36c402ecb6de4b7882942f1b87ef4b5e176d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8475df16864b20e7f159c6beade92f2c81c8e2a6ee4319ff309ab4ea4ce5f960\"" Jan 29 16:23:49.914654 containerd[1500]: time="2025-01-29T16:23:49.914620169Z" level=info msg="StartContainer for \"8475df16864b20e7f159c6beade92f2c81c8e2a6ee4319ff309ab4ea4ce5f960\"" Jan 29 16:23:49.920789 containerd[1500]: time="2025-01-29T16:23:49.920750177Z" level=info msg="CreateContainer within sandbox \"962e5b56cf39eacb8ce51d526845979ccfe560aea5a99386c075a65982967693\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"62b32a1da7150461e58ddb4c346e357c610854e32046f5536922fe6765249816\"" Jan 29 16:23:49.923588 containerd[1500]: time="2025-01-29T16:23:49.921899207Z" level=info msg="StartContainer for \"62b32a1da7150461e58ddb4c346e357c610854e32046f5536922fe6765249816\"" Jan 29 16:23:49.932069 kubelet[2347]: W0129 16:23:49.932001 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:49.932342 kubelet[2347]: E0129 16:23:49.932311 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.128.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:49.941968 kubelet[2347]: W0129 16:23:49.941841 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:49.942771 kubelet[2347]: E0129 16:23:49.942746 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.57:6443: connect: connection refused Jan 29 16:23:49.944815 systemd[1]: Started cri-containerd-7497840cd46732552b9c954af8a327c852afbe6517693ad077a82a8281973ef6.scope - libcontainer container 7497840cd46732552b9c954af8a327c852afbe6517693ad077a82a8281973ef6. Jan 29 16:23:49.951871 kubelet[2347]: E0129 16:23:49.951787 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.57:6443: connect: connection refused" interval="1.6s" Jan 29 16:23:49.989797 systemd[1]: Started cri-containerd-62b32a1da7150461e58ddb4c346e357c610854e32046f5536922fe6765249816.scope - libcontainer container 62b32a1da7150461e58ddb4c346e357c610854e32046f5536922fe6765249816. Jan 29 16:23:50.001098 systemd[1]: Started cri-containerd-8475df16864b20e7f159c6beade92f2c81c8e2a6ee4319ff309ab4ea4ce5f960.scope - libcontainer container 8475df16864b20e7f159c6beade92f2c81c8e2a6ee4319ff309ab4ea4ce5f960. 
Jan 29 16:23:50.081595 kubelet[2347]: I0129 16:23:50.080213 2347 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:50.081595 kubelet[2347]: E0129 16:23:50.080679 2347 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.57:6443/api/v1/nodes\": dial tcp 10.128.0.57:6443: connect: connection refused" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:50.088355 containerd[1500]: time="2025-01-29T16:23:50.088150956Z" level=info msg="StartContainer for \"62b32a1da7150461e58ddb4c346e357c610854e32046f5536922fe6765249816\" returns successfully" Jan 29 16:23:50.115609 containerd[1500]: time="2025-01-29T16:23:50.114740420Z" level=info msg="StartContainer for \"8475df16864b20e7f159c6beade92f2c81c8e2a6ee4319ff309ab4ea4ce5f960\" returns successfully" Jan 29 16:23:50.126694 containerd[1500]: time="2025-01-29T16:23:50.126650372Z" level=info msg="StartContainer for \"7497840cd46732552b9c954af8a327c852afbe6517693ad077a82a8281973ef6\" returns successfully" Jan 29 16:23:51.696052 kubelet[2347]: I0129 16:23:51.696003 2347 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:53.838399 kubelet[2347]: E0129 16:23:53.838338 2347 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" not found" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:53.872626 kubelet[2347]: I0129 16:23:53.872552 2347 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" Jan 29 16:23:53.880615 kubelet[2347]: E0129 16:23:53.880328 2347 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal.181f366e4b768b49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,UID:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,},FirstTimestamp:2025-01-29 16:23:48.528663369 +0000 UTC m=+0.755784980,LastTimestamp:2025-01-29 16:23:48.528663369 +0000 UTC m=+0.755784980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,}" Jan 29 16:23:53.980025 kubelet[2347]: E0129 16:23:53.979870 2347 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal.181f366e4ee93e40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,UID:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,},FirstTimestamp:2025-01-29 16:23:48.586511936 +0000 UTC m=+0.813633551,LastTimestamp:2025-01-29 16:23:48.586511936 +0000 UTC m=+0.813633551,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal,}" Jan 29 16:23:54.520381 kubelet[2347]: I0129 16:23:54.520320 2347 apiserver.go:52] "Watching apiserver" Jan 29 16:23:54.553532 kubelet[2347]: I0129 16:23:54.553468 2347 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:23:55.728052 systemd[1]: Reload requested from client PID 2625 ('systemctl') (unit session-7.scope)... Jan 29 16:23:55.728075 systemd[1]: Reloading... Jan 29 16:23:55.897596 zram_generator::config[2673]: No configuration found. Jan 29 16:23:56.050852 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:23:56.223265 systemd[1]: Reloading finished in 494 ms. Jan 29 16:23:56.261681 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:56.276520 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:23:56.276878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:56.276976 systemd[1]: kubelet.service: Consumed 1.276s CPU time, 115.4M memory peak. Jan 29 16:23:56.284180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:56.559649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:56.573284 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:23:56.678888 kubelet[2718]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:23:56.679416 kubelet[2718]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:23:56.679416 kubelet[2718]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:23:56.679416 kubelet[2718]: I0129 16:23:56.679365 2718 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:23:56.692841 kubelet[2718]: I0129 16:23:56.692677 2718 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 16:23:56.692841 kubelet[2718]: I0129 16:23:56.692725 2718 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:23:56.693790 kubelet[2718]: I0129 16:23:56.693593 2718 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 16:23:56.696545 kubelet[2718]: I0129 16:23:56.695767 2718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
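Unlike the first kubelet, pid 2718 finds an already-bootstrapped client certificate at /var/lib/kubelet/pki/kubelet-client-current.pem, so certificate rotation proceeds without the CSR POST that was refused earlier. The rotated pair can be inspected with stock openssl, no cluster access needed:

    openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem \
        -noout -subject -issuer -dates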
Jan 29 16:23:56.699090 kubelet[2718]: I0129 16:23:56.698673 2718 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:23:56.726843 kubelet[2718]: I0129 16:23:56.726314 2718 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 16:23:56.727630 kubelet[2718]: I0129 16:23:56.727533 2718 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:23:56.727969 kubelet[2718]: I0129 16:23:56.727610 2718 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 16:23:56.728182 kubelet[2718]: I0129 16:23:56.727995 2718 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:23:56.728182 kubelet[2718]: I0129 16:23:56.728033 2718 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 16:23:56.728182 kubelet[2718]: I0129 16:23:56.728104 2718 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:23:56.728355 kubelet[2718]: I0129 16:23:56.728252 2718 kubelet.go:400] "Attempting to sync node with API server" Jan 29 16:23:56.728355 kubelet[2718]: I0129 16:23:56.728284 2718 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:23:56.728355 kubelet[2718]: I0129 16:23:56.728326 2718 kubelet.go:312] "Adding apiserver pod source" Jan 29 16:23:56.728355 kubelet[2718]: I0129 16:23:56.728352 2718 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:23:56.734603 kubelet[2718]: I0129 16:23:56.733517 2718 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:23:56.734603 kubelet[2718]: I0129 16:23:56.733951 2718 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:23:56.734834 kubelet[2718]: I0129 16:23:56.734793 2718 server.go:1264] "Started 
kubelet" Jan 29 16:23:56.743588 kubelet[2718]: I0129 16:23:56.741926 2718 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:23:56.756182 kubelet[2718]: I0129 16:23:56.756116 2718 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:23:56.761585 kubelet[2718]: I0129 16:23:56.758877 2718 server.go:455] "Adding debug handlers to kubelet server" Jan 29 16:23:56.767219 sudo[2731]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 16:23:56.767785 sudo[2731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 16:23:56.770250 kubelet[2718]: I0129 16:23:56.770177 2718 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:23:56.770657 kubelet[2718]: I0129 16:23:56.770637 2718 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:23:56.774310 kubelet[2718]: I0129 16:23:56.773676 2718 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 16:23:56.782350 kubelet[2718]: I0129 16:23:56.782320 2718 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:23:56.782598 kubelet[2718]: I0129 16:23:56.782542 2718 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:23:56.791293 kubelet[2718]: I0129 16:23:56.791246 2718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:23:56.796360 kubelet[2718]: I0129 16:23:56.796329 2718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:23:56.796792 kubelet[2718]: I0129 16:23:56.796497 2718 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:23:56.796792 kubelet[2718]: I0129 16:23:56.796525 2718 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 16:23:56.812962 kubelet[2718]: E0129 16:23:56.812047 2718 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:23:56.832644 kubelet[2718]: E0129 16:23:56.832222 2718 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 16:23:56.835014 kubelet[2718]: I0129 16:23:56.834843 2718 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:23:56.835014 kubelet[2718]: I0129 16:23:56.834965 2718 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:23:56.841288 kubelet[2718]: I0129 16:23:56.841255 2718 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:23:56.889893 kubelet[2718]: I0129 16:23:56.889845 2718 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:56.914247 kubelet[2718]: E0129 16:23:56.914209 2718 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 29 16:23:56.918053 kubelet[2718]: I0129 16:23:56.918017 2718 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:56.918209 kubelet[2718]: I0129 16:23:56.918123 2718 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:56.995393 kubelet[2718]: I0129 16:23:56.994719 2718 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 16:23:56.995393 kubelet[2718]: I0129 16:23:56.994747 2718 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 16:23:56.995393 kubelet[2718]: I0129 16:23:56.994778 2718 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:23:56.997361 kubelet[2718]: I0129 16:23:56.997021 2718 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 16:23:56.997361 kubelet[2718]: I0129 16:23:56.997049 2718 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 16:23:56.997361 kubelet[2718]: I0129 16:23:56.997087 2718 policy_none.go:49] "None policy: Start"
Jan 29 16:23:56.999643 kubelet[2718]: I0129 16:23:56.999080 2718 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 16:23:56.999643 kubelet[2718]: I0129 16:23:56.999116 2718 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:23:56.999643 kubelet[2718]: I0129 16:23:56.999413 2718 state_mem.go:75] "Updated machine memory state"
Jan 29 16:23:57.013070 kubelet[2718]: I0129 16:23:57.013030 2718 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:23:57.014041 kubelet[2718]: I0129 16:23:57.013981 2718 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:23:57.015634 kubelet[2718]: I0129 16:23:57.014555 2718 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:23:57.116410 kubelet[2718]: I0129 16:23:57.115424 2718 topology_manager.go:215] "Topology Admit Handler" podUID="4ecae98e6307a4bd8310a797bc4f622f" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:57.116410 kubelet[2718]: I0129 16:23:57.115627 2718 topology_manager.go:215] "Topology Admit Handler" podUID="086f8bc3464f3a27348a00bcfa9fe106" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:57.116410 kubelet[2718]: I0129 16:23:57.115682 2718 topology_manager.go:215] "Topology Admit Handler" podUID="d3b9c194898cf9c2e4fb3a646477ca34" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:57.136443 kubelet[2718]: W0129 16:23:57.136398 2718 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jan 29 16:23:57.139889 kubelet[2718]: W0129 16:23:57.139860 2718 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jan 29 16:23:57.141778 kubelet[2718]: W0129 16:23:57.141604 2718 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jan 29 16:23:57.186977 kubelet[2718]: I0129 16:23:57.186820 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ecae98e6307a4bd8310a797bc4f622f-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"4ecae98e6307a4bd8310a797bc4f622f\") " pod="kube-system/kube-scheduler-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:57.186977 kubelet[2718]: I0129 16:23:57.186919 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/086f8bc3464f3a27348a00bcfa9fe106-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"086f8bc3464f3a27348a00bcfa9fe106\") " pod="kube-system/kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:57.187738 kubelet[2718]: I0129 16:23:57.187421 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3b9c194898cf9c2e4fb3a646477ca34-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"d3b9c194898cf9c2e4fb3a646477ca34\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:57.187738 kubelet[2718]: I0129 16:23:57.187514 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3b9c194898cf9c2e4fb3a646477ca34-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"d3b9c194898cf9c2e4fb3a646477ca34\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:57.187738 kubelet[2718]: I0129 16:23:57.187609 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3b9c194898cf9c2e4fb3a646477ca34-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"d3b9c194898cf9c2e4fb3a646477ca34\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:57.187738 kubelet[2718]: I0129 16:23:57.187681 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3b9c194898cf9c2e4fb3a646477ca34-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"d3b9c194898cf9c2e4fb3a646477ca34\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:57.188369 kubelet[2718]: I0129 16:23:57.187714 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/086f8bc3464f3a27348a00bcfa9fe106-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"086f8bc3464f3a27348a00bcfa9fe106\") " pod="kube-system/kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:57.188369 kubelet[2718]: I0129 16:23:57.188229 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/086f8bc3464f3a27348a00bcfa9fe106-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"086f8bc3464f3a27348a00bcfa9fe106\") " pod="kube-system/kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:57.188369 kubelet[2718]: I0129 16:23:57.188284 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d3b9c194898cf9c2e4fb3a646477ca34-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" (UID: \"d3b9c194898cf9c2e4fb3a646477ca34\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:57.639786 sudo[2731]: pam_unix(sudo:session): session closed for user root
Jan 29 16:23:57.729633 kubelet[2718]: I0129 16:23:57.729583 2718 apiserver.go:52] "Watching apiserver"
Jan 29 16:23:57.783463 kubelet[2718]: I0129 16:23:57.783385 2718 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 16:23:57.918616 kubelet[2718]: W0129 16:23:57.918470 2718 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Jan 29 16:23:57.918616 kubelet[2718]: E0129 16:23:57.918577 2718 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal"
Jan 29 16:23:57.991297 kubelet[2718]: I0129 16:23:57.991196 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" podStartSLOduration=0.991172615 podStartE2EDuration="991.172615ms" podCreationTimestamp="2025-01-29 16:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:23:57.959896682 +0000 UTC m=+1.377953797" watchObservedRunningTime="2025-01-29 16:23:57.991172615 +0000 UTC m=+1.409229723"
Jan 29 16:23:58.034932 kubelet[2718]: I0129 16:23:58.034860 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" podStartSLOduration=1.034812641 podStartE2EDuration="1.034812641s" podCreationTimestamp="2025-01-29 16:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:23:58.032320763 +0000 UTC m=+1.450377882" watchObservedRunningTime="2025-01-29 16:23:58.034812641 +0000 UTC m=+1.452869750"
Jan 29 16:23:58.035204 kubelet[2718]: I0129 16:23:58.035103 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" podStartSLOduration=1.03507547 podStartE2EDuration="1.03507547s" podCreationTimestamp="2025-01-29 16:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:23:57.993200538 +0000 UTC m=+1.411257652" watchObservedRunningTime="2025-01-29 16:23:58.03507547 +0000 UTC m=+1.453132586"
Jan 29 16:23:59.940413 sudo[1753]: pam_unix(sudo:session): session closed for user root
Jan 29 16:23:59.983531 sshd[1752]: Connection closed by 147.75.109.163 port 54938
Jan 29 16:23:59.984510 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Jan 29 16:23:59.989158 systemd[1]: sshd@6-10.128.0.57:22-147.75.109.163:54938.service: Deactivated successfully.
Jan 29 16:23:59.992462 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 16:23:59.993055 systemd[1]: session-7.scope: Consumed 7.811s CPU time, 294M memory peak.
Jan 29 16:23:59.996184 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit.
Jan 29 16:23:59.998164 systemd-logind[1483]: Removed session 7.
Jan 29 16:24:03.651199 update_engine[1487]: I20250129 16:24:03.651114 1487 update_attempter.cc:509] Updating boot flags...
Jan 29 16:24:03.724592 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2797)
Jan 29 16:24:03.910012 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2796)
Jan 29 16:24:04.019601 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2796)
Jan 29 16:24:11.083287 kubelet[2718]: I0129 16:24:11.083052 2718 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 16:24:11.086585 containerd[1500]: time="2025-01-29T16:24:11.086081854Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 16:24:11.087076 kubelet[2718]: I0129 16:24:11.086462 2718 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 16:24:11.207055 kubelet[2718]: I0129 16:24:11.206995 2718 topology_manager.go:215] "Topology Admit Handler" podUID="7802141a-6f7c-4695-92f1-7e02e5695f67" podNamespace="kube-system" podName="cilium-operator-599987898-z578m"
Jan 29 16:24:11.218926 kubelet[2718]: W0129 16:24:11.218778 2718 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal' and this object
Jan 29 16:24:11.218926 kubelet[2718]: E0129 16:24:11.218839 2718 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal' and this object
Jan 29 16:24:11.219166 kubelet[2718]: W0129 16:24:11.218942 2718 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal' and this object
Jan 29 16:24:11.219166 kubelet[2718]: E0129 16:24:11.218963 2718 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal' and this object
Jan 29 16:24:11.224615 kubelet[2718]: I0129 16:24:11.224534 2718 topology_manager.go:215] "Topology Admit Handler" podUID="bdd3582b-2d91-4490-934a-4b8a757b897e" podNamespace="kube-system" podName="kube-proxy-97stm"
Jan 29 16:24:11.230597 kubelet[2718]: W0129 16:24:11.228761 2718 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal' and this object
Jan 29 16:24:11.231537 kubelet[2718]: E0129 16:24:11.231499 2718 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal' and this object
Jan 29 16:24:11.242768 systemd[1]: Created slice kubepods-besteffort-pod7802141a_6f7c_4695_92f1_7e02e5695f67.slice - libcontainer container kubepods-besteffort-pod7802141a_6f7c_4695_92f1_7e02e5695f67.slice.
Jan 29 16:24:11.268501 systemd[1]: Created slice kubepods-besteffort-podbdd3582b_2d91_4490_934a_4b8a757b897e.slice - libcontainer container kubepods-besteffort-podbdd3582b_2d91_4490_934a_4b8a757b897e.slice.
Jan 29 16:24:11.276206 kubelet[2718]: I0129 16:24:11.276150 2718 topology_manager.go:215] "Topology Admit Handler" podUID="4c552780-7e29-4575-923d-f0e9ae8b5cce" podNamespace="kube-system" podName="cilium-2lwv8"
Jan 29 16:24:11.280650 kubelet[2718]: I0129 16:24:11.280172 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdd3582b-2d91-4490-934a-4b8a757b897e-lib-modules\") pod \"kube-proxy-97stm\" (UID: \"bdd3582b-2d91-4490-934a-4b8a757b897e\") " pod="kube-system/kube-proxy-97stm"
Jan 29 16:24:11.281708 kubelet[2718]: I0129 16:24:11.280632 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzkrn\" (UniqueName: \"kubernetes.io/projected/7802141a-6f7c-4695-92f1-7e02e5695f67-kube-api-access-tzkrn\") pod \"cilium-operator-599987898-z578m\" (UID: \"7802141a-6f7c-4695-92f1-7e02e5695f67\") " pod="kube-system/cilium-operator-599987898-z578m"
Jan 29 16:24:11.281708 kubelet[2718]: I0129 16:24:11.281070 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7802141a-6f7c-4695-92f1-7e02e5695f67-cilium-config-path\") pod \"cilium-operator-599987898-z578m\" (UID: \"7802141a-6f7c-4695-92f1-7e02e5695f67\") " pod="kube-system/cilium-operator-599987898-z578m"
Jan 29 16:24:11.281708 kubelet[2718]: I0129 16:24:11.281113 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bdd3582b-2d91-4490-934a-4b8a757b897e-kube-proxy\") pod \"kube-proxy-97stm\" (UID: \"bdd3582b-2d91-4490-934a-4b8a757b897e\") " pod="kube-system/kube-proxy-97stm"
Jan 29 16:24:11.281708 kubelet[2718]: I0129 16:24:11.281207 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdd3582b-2d91-4490-934a-4b8a757b897e-xtables-lock\") pod \"kube-proxy-97stm\" (UID: \"bdd3582b-2d91-4490-934a-4b8a757b897e\") " pod="kube-system/kube-proxy-97stm"
Jan 29 16:24:11.282003 kubelet[2718]: I0129 16:24:11.281723 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65zgh\" (UniqueName: \"kubernetes.io/projected/bdd3582b-2d91-4490-934a-4b8a757b897e-kube-api-access-65zgh\") pod \"kube-proxy-97stm\" (UID: \"bdd3582b-2d91-4490-934a-4b8a757b897e\") " pod="kube-system/kube-proxy-97stm"
Jan 29 16:24:11.295614 systemd[1]: Created slice kubepods-burstable-pod4c552780_7e29_4575_923d_f0e9ae8b5cce.slice - libcontainer container kubepods-burstable-pod4c552780_7e29_4575_923d_f0e9ae8b5cce.slice.
Jan 29 16:24:11.382779 kubelet[2718]: I0129 16:24:11.382726 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-bpf-maps\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:11.384681 kubelet[2718]: I0129 16:24:11.382990 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-cilium-cgroup\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:11.384681 kubelet[2718]: I0129 16:24:11.383024 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-host-proc-sys-net\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:11.384681 kubelet[2718]: I0129 16:24:11.383080 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-host-proc-sys-kernel\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:11.384681 kubelet[2718]: I0129 16:24:11.383111 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c552780-7e29-4575-923d-f0e9ae8b5cce-clustermesh-secrets\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:11.384681 kubelet[2718]: I0129 16:24:11.383139 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvgt\" (UniqueName: \"kubernetes.io/projected/4c552780-7e29-4575-923d-f0e9ae8b5cce-kube-api-access-6vvgt\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:11.384998 kubelet[2718]: I0129 16:24:11.383224 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-cilium-run\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:11.384998 kubelet[2718]: I0129 16:24:11.383251 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-hostproc\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:11.384998 kubelet[2718]: I0129 16:24:11.383318 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-cni-path\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:11.384998 kubelet[2718]: I0129 16:24:11.383366 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-etc-cni-netd\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:11.384998 kubelet[2718]: I0129 16:24:11.383393 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c552780-7e29-4575-923d-f0e9ae8b5cce-cilium-config-path\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:11.384998 kubelet[2718]: I0129 16:24:11.383441 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-lib-modules\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:11.385301 kubelet[2718]: I0129 16:24:11.383470 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-xtables-lock\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:11.385301 kubelet[2718]: I0129 16:24:11.383498 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c552780-7e29-4575-923d-f0e9ae8b5cce-hubble-tls\") pod \"cilium-2lwv8\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " pod="kube-system/cilium-2lwv8"
Jan 29 16:24:12.384235 kubelet[2718]: E0129 16:24:12.384186 2718 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:24:12.384884 kubelet[2718]: E0129 16:24:12.384300 2718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bdd3582b-2d91-4490-934a-4b8a757b897e-kube-proxy podName:bdd3582b-2d91-4490-934a-4b8a757b897e nodeName:}" failed. No retries permitted until 2025-01-29 16:24:12.884271175 +0000 UTC m=+16.302328280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/bdd3582b-2d91-4490-934a-4b8a757b897e-kube-proxy") pod "kube-proxy-97stm" (UID: "bdd3582b-2d91-4490-934a-4b8a757b897e") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:24:12.403604 kubelet[2718]: E0129 16:24:12.403513 2718 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:24:12.403604 kubelet[2718]: E0129 16:24:12.403612 2718 projected.go:200] Error preparing data for projected volume kube-api-access-65zgh for pod kube-system/kube-proxy-97stm: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:24:12.403870 kubelet[2718]: E0129 16:24:12.403718 2718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdd3582b-2d91-4490-934a-4b8a757b897e-kube-api-access-65zgh podName:bdd3582b-2d91-4490-934a-4b8a757b897e nodeName:}" failed. No retries permitted until 2025-01-29 16:24:12.903692507 +0000 UTC m=+16.321749615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-65zgh" (UniqueName: "kubernetes.io/projected/bdd3582b-2d91-4490-934a-4b8a757b897e-kube-api-access-65zgh") pod "kube-proxy-97stm" (UID: "bdd3582b-2d91-4490-934a-4b8a757b897e") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:24:12.417190 kubelet[2718]: E0129 16:24:12.417122 2718 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:24:12.417190 kubelet[2718]: E0129 16:24:12.417176 2718 projected.go:200] Error preparing data for projected volume kube-api-access-tzkrn for pod kube-system/cilium-operator-599987898-z578m: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:24:12.417457 kubelet[2718]: E0129 16:24:12.417264 2718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7802141a-6f7c-4695-92f1-7e02e5695f67-kube-api-access-tzkrn podName:7802141a-6f7c-4695-92f1-7e02e5695f67 nodeName:}" failed. No retries permitted until 2025-01-29 16:24:12.917239188 +0000 UTC m=+16.335296282 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tzkrn" (UniqueName: "kubernetes.io/projected/7802141a-6f7c-4695-92f1-7e02e5695f67-kube-api-access-tzkrn") pod "cilium-operator-599987898-z578m" (UID: "7802141a-6f7c-4695-92f1-7e02e5695f67") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:24:12.497016 kubelet[2718]: E0129 16:24:12.496964 2718 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:24:12.497016 kubelet[2718]: E0129 16:24:12.497007 2718 projected.go:200] Error preparing data for projected volume kube-api-access-6vvgt for pod kube-system/cilium-2lwv8: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:24:12.497279 kubelet[2718]: E0129 16:24:12.497110 2718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c552780-7e29-4575-923d-f0e9ae8b5cce-kube-api-access-6vvgt podName:4c552780-7e29-4575-923d-f0e9ae8b5cce nodeName:}" failed. No retries permitted until 2025-01-29 16:24:12.997085538 +0000 UTC m=+16.415142642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6vvgt" (UniqueName: "kubernetes.io/projected/4c552780-7e29-4575-923d-f0e9ae8b5cce-kube-api-access-6vvgt") pod "cilium-2lwv8" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:24:13.062061 containerd[1500]: time="2025-01-29T16:24:13.062002163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-z578m,Uid:7802141a-6f7c-4695-92f1-7e02e5695f67,Namespace:kube-system,Attempt:0,}"
Jan 29 16:24:13.072947 containerd[1500]: time="2025-01-29T16:24:13.072884690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-97stm,Uid:bdd3582b-2d91-4490-934a-4b8a757b897e,Namespace:kube-system,Attempt:0,}"
Jan 29 16:24:13.106153 containerd[1500]: time="2025-01-29T16:24:13.106104166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2lwv8,Uid:4c552780-7e29-4575-923d-f0e9ae8b5cce,Namespace:kube-system,Attempt:0,}"
Jan 29 16:24:13.140479 containerd[1500]: time="2025-01-29T16:24:13.139745857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:24:13.140479 containerd[1500]: time="2025-01-29T16:24:13.140057774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:24:13.140479 containerd[1500]: time="2025-01-29T16:24:13.140092352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:24:13.143978 containerd[1500]: time="2025-01-29T16:24:13.143070862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:24:13.148010 containerd[1500]: time="2025-01-29T16:24:13.146722493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:24:13.148010 containerd[1500]: time="2025-01-29T16:24:13.147636228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:24:13.148010 containerd[1500]: time="2025-01-29T16:24:13.147667520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:24:13.148010 containerd[1500]: time="2025-01-29T16:24:13.147802486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:24:13.179190 systemd[1]: Started cri-containerd-03d8c7be62fb4625f6417721e41d80f317b24df53e12fdf00b8159d7a23534e4.scope - libcontainer container 03d8c7be62fb4625f6417721e41d80f317b24df53e12fdf00b8159d7a23534e4.
Jan 29 16:24:13.187107 containerd[1500]: time="2025-01-29T16:24:13.186997789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:24:13.187552 containerd[1500]: time="2025-01-29T16:24:13.187199813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:24:13.187552 containerd[1500]: time="2025-01-29T16:24:13.187256263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:24:13.187949 containerd[1500]: time="2025-01-29T16:24:13.187481750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:24:13.210825 systemd[1]: Started cri-containerd-230d1d75522b2e8af1b4f179932c93fe1fffaa4f5f4ce1e120bf2c3b6baf9177.scope - libcontainer container 230d1d75522b2e8af1b4f179932c93fe1fffaa4f5f4ce1e120bf2c3b6baf9177.
Jan 29 16:24:13.225242 systemd[1]: Started cri-containerd-2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7.scope - libcontainer container 2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7.
Jan 29 16:24:13.283698 containerd[1500]: time="2025-01-29T16:24:13.283319836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2lwv8,Uid:4c552780-7e29-4575-923d-f0e9ae8b5cce,Namespace:kube-system,Attempt:0,} returns sandbox id \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\""
Jan 29 16:24:13.285282 containerd[1500]: time="2025-01-29T16:24:13.284905650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-97stm,Uid:bdd3582b-2d91-4490-934a-4b8a757b897e,Namespace:kube-system,Attempt:0,} returns sandbox id \"230d1d75522b2e8af1b4f179932c93fe1fffaa4f5f4ce1e120bf2c3b6baf9177\""
Jan 29 16:24:13.293547 containerd[1500]: time="2025-01-29T16:24:13.293352514Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 16:24:13.295604 containerd[1500]: time="2025-01-29T16:24:13.295551807Z" level=info msg="CreateContainer within sandbox \"230d1d75522b2e8af1b4f179932c93fe1fffaa4f5f4ce1e120bf2c3b6baf9177\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 16:24:13.319471 containerd[1500]: time="2025-01-29T16:24:13.318667459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-z578m,Uid:7802141a-6f7c-4695-92f1-7e02e5695f67,Namespace:kube-system,Attempt:0,} returns sandbox id \"03d8c7be62fb4625f6417721e41d80f317b24df53e12fdf00b8159d7a23534e4\""
Jan 29 16:24:13.332582 containerd[1500]: time="2025-01-29T16:24:13.332482921Z" level=info msg="CreateContainer within sandbox \"230d1d75522b2e8af1b4f179932c93fe1fffaa4f5f4ce1e120bf2c3b6baf9177\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8505dc30c264f3d8eb3a218cb39fce5b8fe1890b8ca4a84c239f95e47219bcbf\""
Jan 29 16:24:13.335005 containerd[1500]: time="2025-01-29T16:24:13.333392547Z" level=info msg="StartContainer for \"8505dc30c264f3d8eb3a218cb39fce5b8fe1890b8ca4a84c239f95e47219bcbf\""
Jan 29 16:24:13.376819 systemd[1]: Started cri-containerd-8505dc30c264f3d8eb3a218cb39fce5b8fe1890b8ca4a84c239f95e47219bcbf.scope - libcontainer container 8505dc30c264f3d8eb3a218cb39fce5b8fe1890b8ca4a84c239f95e47219bcbf.
Jan 29 16:24:13.420122 containerd[1500]: time="2025-01-29T16:24:13.420035448Z" level=info msg="StartContainer for \"8505dc30c264f3d8eb3a218cb39fce5b8fe1890b8ca4a84c239f95e47219bcbf\" returns successfully"
Jan 29 16:24:13.954893 kubelet[2718]: I0129 16:24:13.954807 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-97stm" podStartSLOduration=2.954783771 podStartE2EDuration="2.954783771s" podCreationTimestamp="2025-01-29 16:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:24:13.954424718 +0000 UTC m=+17.372481834" watchObservedRunningTime="2025-01-29 16:24:13.954783771 +0000 UTC m=+17.372840887"
Jan 29 16:24:16.020793 systemd[1]: Started sshd@7-10.128.0.57:22-110.39.182.66:57607.service - OpenSSH per-connection server daemon (110.39.182.66:57607).
Jan 29 16:24:18.573581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927893140.mount: Deactivated successfully.
Jan 29 16:24:19.268940 sshd[3101]: Invalid user centos from 110.39.182.66 port 57607
Jan 29 16:24:20.056169 sshd[3101]: PAM: Permission denied for illegal user centos from 110.39.182.66
Jan 29 16:24:20.056539 sshd[3101]: Failed keyboard-interactive/pam for invalid user centos from 110.39.182.66 port 57607 ssh2
Jan 29 16:24:20.885931 sshd[3101]: Connection closed by invalid user centos 110.39.182.66 port 57607 [preauth]
Jan 29 16:24:20.891279 systemd[1]: sshd@7-10.128.0.57:22-110.39.182.66:57607.service: Deactivated successfully.
Jan 29 16:24:21.283380 containerd[1500]: time="2025-01-29T16:24:21.283220955Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:24:21.284980 containerd[1500]: time="2025-01-29T16:24:21.284907575Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 29 16:24:21.286471 containerd[1500]: time="2025-01-29T16:24:21.286391145Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:24:21.288831 containerd[1500]: time="2025-01-29T16:24:21.288670034Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.995264519s"
Jan 29 16:24:21.288831 containerd[1500]: time="2025-01-29T16:24:21.288715431Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 29 16:24:21.291004 containerd[1500]: time="2025-01-29T16:24:21.290730123Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 29 16:24:21.292470 containerd[1500]: time="2025-01-29T16:24:21.292435865Z" level=info msg="CreateContainer within sandbox \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:24:21.313231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount303488242.mount: Deactivated successfully.
Jan 29 16:24:21.315305 containerd[1500]: time="2025-01-29T16:24:21.315247676Z" level=info msg="CreateContainer within sandbox \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5\""
Jan 29 16:24:21.317472 containerd[1500]: time="2025-01-29T16:24:21.316290566Z" level=info msg="StartContainer for \"774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5\""
Jan 29 16:24:21.368804 systemd[1]: Started cri-containerd-774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5.scope - libcontainer container 774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5.
Jan 29 16:24:21.409977 containerd[1500]: time="2025-01-29T16:24:21.409811487Z" level=info msg="StartContainer for \"774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5\" returns successfully"
Jan 29 16:24:21.427721 systemd[1]: cri-containerd-774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5.scope: Deactivated successfully.
Jan 29 16:24:22.305750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5-rootfs.mount: Deactivated successfully.
Jan 29 16:24:23.266260 containerd[1500]: time="2025-01-29T16:24:23.266112037Z" level=info msg="shim disconnected" id=774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5 namespace=k8s.io
Jan 29 16:24:23.266260 containerd[1500]: time="2025-01-29T16:24:23.266186602Z" level=warning msg="cleaning up after shim disconnected" id=774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5 namespace=k8s.io
Jan 29 16:24:23.266260 containerd[1500]: time="2025-01-29T16:24:23.266215251Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:24:23.672326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3379028879.mount: Deactivated successfully.
Jan 29 16:24:23.985473 containerd[1500]: time="2025-01-29T16:24:23.982537758Z" level=info msg="CreateContainer within sandbox \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:24:24.012806 containerd[1500]: time="2025-01-29T16:24:24.012725232Z" level=info msg="CreateContainer within sandbox \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02\""
Jan 29 16:24:24.015592 containerd[1500]: time="2025-01-29T16:24:24.015086838Z" level=info msg="StartContainer for \"f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02\""
Jan 29 16:24:24.086839 systemd[1]: Started cri-containerd-f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02.scope - libcontainer container f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02.
Jan 29 16:24:24.168994 containerd[1500]: time="2025-01-29T16:24:24.168908730Z" level=info msg="StartContainer for \"f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02\" returns successfully"
Jan 29 16:24:24.197447 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:24:24.199121 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:24:24.199862 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:24:24.211212 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:24:24.211636 systemd[1]: cri-containerd-f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02.scope: Deactivated successfully.
Jan 29 16:24:24.257328 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:24:24.325038 containerd[1500]: time="2025-01-29T16:24:24.324728428Z" level=info msg="shim disconnected" id=f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02 namespace=k8s.io
Jan 29 16:24:24.325038 containerd[1500]: time="2025-01-29T16:24:24.324803766Z" level=warning msg="cleaning up after shim disconnected" id=f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02 namespace=k8s.io
Jan 29 16:24:24.325038 containerd[1500]: time="2025-01-29T16:24:24.324817875Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:24:24.652643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02-rootfs.mount: Deactivated successfully.
Jan 29 16:24:24.680114 containerd[1500]: time="2025-01-29T16:24:24.680043480Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:24:24.681437 containerd[1500]: time="2025-01-29T16:24:24.681374611Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 29 16:24:24.682986 containerd[1500]: time="2025-01-29T16:24:24.682865713Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:24:24.684942 containerd[1500]: time="2025-01-29T16:24:24.684899550Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.39412487s"
Jan 29 16:24:24.685218 containerd[1500]: time="2025-01-29T16:24:24.685084264Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 29 16:24:24.688344 containerd[1500]: time="2025-01-29T16:24:24.688185260Z" level=info msg="CreateContainer within sandbox \"03d8c7be62fb4625f6417721e41d80f317b24df53e12fdf00b8159d7a23534e4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 29 16:24:24.712974 containerd[1500]: time="2025-01-29T16:24:24.712908448Z" level=info msg="CreateContainer within sandbox \"03d8c7be62fb4625f6417721e41d80f317b24df53e12fdf00b8159d7a23534e4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\""
Jan 29 16:24:24.713809 containerd[1500]: time="2025-01-29T16:24:24.713737860Z" level=info msg="StartContainer for \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\""
Jan 29 16:24:24.761879 systemd[1]: Started cri-containerd-9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff.scope - libcontainer container 9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff.
Jan 29 16:24:24.802183 containerd[1500]: time="2025-01-29T16:24:24.801731621Z" level=info msg="StartContainer for \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\" returns successfully"
Jan 29 16:24:24.992718 containerd[1500]: time="2025-01-29T16:24:24.992423697Z" level=info msg="CreateContainer within sandbox \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:24:25.023878 containerd[1500]: time="2025-01-29T16:24:25.023701659Z" level=info msg="CreateContainer within sandbox \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb\""
Jan 29 16:24:25.025596 containerd[1500]: time="2025-01-29T16:24:25.024352004Z" level=info msg="StartContainer for \"afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb\""
Jan 29 16:24:25.089810 systemd[1]: Started cri-containerd-afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb.scope - libcontainer container afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb.
Jan 29 16:24:25.149437 kubelet[2718]: I0129 16:24:25.149350 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-z578m" podStartSLOduration=2.784272382 podStartE2EDuration="14.149321706s" podCreationTimestamp="2025-01-29 16:24:11 +0000 UTC" firstStartedPulling="2025-01-29 16:24:13.32110686 +0000 UTC m=+16.739163954" lastFinishedPulling="2025-01-29 16:24:24.686156153 +0000 UTC m=+28.104213278" observedRunningTime="2025-01-29 16:24:25.021425886 +0000 UTC m=+28.439483002" watchObservedRunningTime="2025-01-29 16:24:25.149321706 +0000 UTC m=+28.567378823"
Jan 29 16:24:25.175600 containerd[1500]: time="2025-01-29T16:24:25.175154196Z" level=info msg="StartContainer for \"afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb\" returns successfully"
Jan 29 16:24:25.192841 systemd[1]: cri-containerd-afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb.scope: Deactivated successfully.
Jan 29 16:24:25.322678 containerd[1500]: time="2025-01-29T16:24:25.321663047Z" level=info msg="shim disconnected" id=afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb namespace=k8s.io
Jan 29 16:24:25.324717 containerd[1500]: time="2025-01-29T16:24:25.323623584Z" level=warning msg="cleaning up after shim disconnected" id=afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb namespace=k8s.io
Jan 29 16:24:25.324717 containerd[1500]: time="2025-01-29T16:24:25.323654848Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:24:25.997860 containerd[1500]: time="2025-01-29T16:24:25.997523935Z" level=info msg="CreateContainer within sandbox \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:24:26.024028 containerd[1500]: time="2025-01-29T16:24:26.023966794Z" level=info msg="CreateContainer within sandbox \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55\""
Jan 29 16:24:26.024966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384383831.mount: Deactivated successfully.
Jan 29 16:24:26.025443 containerd[1500]: time="2025-01-29T16:24:26.025210058Z" level=info msg="StartContainer for \"064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55\""
Jan 29 16:24:26.074780 systemd[1]: Started cri-containerd-064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55.scope - libcontainer container 064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55.
Jan 29 16:24:26.114340 systemd[1]: cri-containerd-064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55.scope: Deactivated successfully.
Jan 29 16:24:26.117803 containerd[1500]: time="2025-01-29T16:24:26.117449160Z" level=info msg="StartContainer for \"064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55\" returns successfully"
Jan 29 16:24:26.148220 containerd[1500]: time="2025-01-29T16:24:26.148102519Z" level=info msg="shim disconnected" id=064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55 namespace=k8s.io
Jan 29 16:24:26.148220 containerd[1500]: time="2025-01-29T16:24:26.148181313Z" level=warning msg="cleaning up after shim disconnected" id=064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55 namespace=k8s.io
Jan 29 16:24:26.148220 containerd[1500]: time="2025-01-29T16:24:26.148198605Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:24:26.652327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55-rootfs.mount: Deactivated successfully.
Jan 29 16:24:27.013645 containerd[1500]: time="2025-01-29T16:24:27.010109630Z" level=info msg="CreateContainer within sandbox \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:24:27.052695 containerd[1500]: time="2025-01-29T16:24:27.052643026Z" level=info msg="CreateContainer within sandbox \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\""
Jan 29 16:24:27.055551 containerd[1500]: time="2025-01-29T16:24:27.055509605Z" level=info msg="StartContainer for \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\""
Jan 29 16:24:27.147285 systemd[1]: Started cri-containerd-a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c.scope - libcontainer container a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c.
Jan 29 16:24:27.281360 containerd[1500]: time="2025-01-29T16:24:27.281229814Z" level=info msg="StartContainer for \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\" returns successfully"
Jan 29 16:24:27.401124 kubelet[2718]: I0129 16:24:27.401067 2718 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 29 16:24:27.440345 kubelet[2718]: I0129 16:24:27.440290 2718 topology_manager.go:215] "Topology Admit Handler" podUID="055cdde1-9660-46dc-a260-dffca1e1959b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rhc68"
Jan 29 16:24:27.446705 kubelet[2718]: I0129 16:24:27.446645 2718 topology_manager.go:215] "Topology Admit Handler" podUID="91458dfe-1c42-4a86-be11-b8cf6618a860" podNamespace="kube-system" podName="coredns-7db6d8ff4d-txgdg"
Jan 29 16:24:27.448792 kubelet[2718]: W0129 16:24:27.448757 2718 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal' and this object
Jan 29 16:24:27.448792 kubelet[2718]: E0129 16:24:27.448802 2718 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal' and this object
Jan 29 16:24:27.454682 systemd[1]: Created slice kubepods-burstable-pod055cdde1_9660_46dc_a260_dffca1e1959b.slice - libcontainer container kubepods-burstable-pod055cdde1_9660_46dc_a260_dffca1e1959b.slice.
Jan 29 16:24:27.471160 systemd[1]: Created slice kubepods-burstable-pod91458dfe_1c42_4a86_be11_b8cf6618a860.slice - libcontainer container kubepods-burstable-pod91458dfe_1c42_4a86_be11_b8cf6618a860.slice.
Jan 29 16:24:27.509628 kubelet[2718]: I0129 16:24:27.509580 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/055cdde1-9660-46dc-a260-dffca1e1959b-config-volume\") pod \"coredns-7db6d8ff4d-rhc68\" (UID: \"055cdde1-9660-46dc-a260-dffca1e1959b\") " pod="kube-system/coredns-7db6d8ff4d-rhc68"
Jan 29 16:24:27.510045 kubelet[2718]: I0129 16:24:27.510005 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxppx\" (UniqueName: \"kubernetes.io/projected/91458dfe-1c42-4a86-be11-b8cf6618a860-kube-api-access-sxppx\") pod \"coredns-7db6d8ff4d-txgdg\" (UID: \"91458dfe-1c42-4a86-be11-b8cf6618a860\") " pod="kube-system/coredns-7db6d8ff4d-txgdg"
Jan 29 16:24:27.510279 kubelet[2718]: I0129 16:24:27.510257 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx7hk\" (UniqueName: \"kubernetes.io/projected/055cdde1-9660-46dc-a260-dffca1e1959b-kube-api-access-bx7hk\") pod \"coredns-7db6d8ff4d-rhc68\" (UID: \"055cdde1-9660-46dc-a260-dffca1e1959b\") " pod="kube-system/coredns-7db6d8ff4d-rhc68"
Jan 29 16:24:27.510441 kubelet[2718]: I0129 16:24:27.510421 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91458dfe-1c42-4a86-be11-b8cf6618a860-config-volume\") pod \"coredns-7db6d8ff4d-txgdg\" (UID: \"91458dfe-1c42-4a86-be11-b8cf6618a860\") " pod="kube-system/coredns-7db6d8ff4d-txgdg"
Jan 29 16:24:28.031484 kubelet[2718]: I0129 16:24:28.030528 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2lwv8" podStartSLOduration=9.030739522 podStartE2EDuration="17.030173328s" podCreationTimestamp="2025-01-29 16:24:11 +0000 UTC" firstStartedPulling="2025-01-29 16:24:13.290654733 +0000 UTC m=+16.708711827" lastFinishedPulling="2025-01-29 16:24:21.290088522 +0000 UTC m=+24.708145633" observedRunningTime="2025-01-29 16:24:28.029794381 +0000 UTC m=+31.447851493" watchObservedRunningTime="2025-01-29 16:24:28.030173328 +0000 UTC m=+31.448230444"
Jan 29 16:24:28.666309 containerd[1500]: time="2025-01-29T16:24:28.666229310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rhc68,Uid:055cdde1-9660-46dc-a260-dffca1e1959b,Namespace:kube-system,Attempt:0,}"
Jan 29 16:24:28.684591 containerd[1500]: time="2025-01-29T16:24:28.683846717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-txgdg,Uid:91458dfe-1c42-4a86-be11-b8cf6618a860,Namespace:kube-system,Attempt:0,}"
Jan 29 16:24:29.577868 systemd-networkd[1350]: cilium_host: Link UP
Jan 29 16:24:29.578108 systemd-networkd[1350]: cilium_net: Link UP
Jan 29 16:24:29.578588 systemd-networkd[1350]: cilium_net: Gained carrier
Jan 29 16:24:29.578883 systemd-networkd[1350]: cilium_host: Gained carrier
Jan 29 16:24:29.579093 systemd-networkd[1350]: cilium_net: Gained IPv6LL
Jan 29 16:24:29.579371 systemd-networkd[1350]: cilium_host: Gained IPv6LL
Jan 29 16:24:29.724075 systemd-networkd[1350]: cilium_vxlan: Link UP
Jan 29 16:24:29.724088 systemd-networkd[1350]: cilium_vxlan: Gained carrier
Jan 29 16:24:30.007817 kernel: NET: Registered PF_ALG protocol family
Jan 29 16:24:30.870759 systemd-networkd[1350]: lxc_health: Link UP
Jan 29 16:24:30.883904 systemd-networkd[1350]: lxc_health: Gained carrier
Jan 29 16:24:31.251712 systemd-networkd[1350]: lxc54cd5daa578c: Link UP
Jan 29 16:24:31.259595 kernel: eth0: renamed from tmpf64bf
Jan 29 16:24:31.274318 systemd-networkd[1350]: lxc54cd5daa578c: Gained carrier
Jan 29 16:24:31.284947 systemd-networkd[1350]: lxc7aa0dc40fb7b: Link UP
Jan 29 16:24:31.300737 kernel: eth0: renamed from tmp38098
Jan 29 16:24:31.317588 systemd-networkd[1350]: lxc7aa0dc40fb7b: Gained carrier
Jan 29 16:24:31.470897 systemd-networkd[1350]: cilium_vxlan: Gained IPv6LL
Jan 29 16:24:32.431500 systemd-networkd[1350]: lxc54cd5daa578c: Gained IPv6LL
Jan 29 16:24:32.558871 systemd-networkd[1350]: lxc_health: Gained IPv6LL
Jan 29 16:24:32.623347 systemd-networkd[1350]: lxc7aa0dc40fb7b: Gained IPv6LL
Jan 29 16:24:35.034543 ntpd[1467]: Listen normally on 8 cilium_host 192.168.0.208:123
Jan 29 16:24:35.034955 ntpd[1467]: Listen normally on 9 cilium_net [fe80::38c2:96ff:fe80:31c8%4]:123
Jan 29 16:24:35.036040 ntpd[1467]: 29 Jan 16:24:35 ntpd[1467]: Listen normally on 8 cilium_host 192.168.0.208:123
Jan 29 16:24:35.036040 ntpd[1467]: 29 Jan 16:24:35 ntpd[1467]: Listen normally on 9 cilium_net [fe80::38c2:96ff:fe80:31c8%4]:123
Jan 29 16:24:35.036040 ntpd[1467]: 29 Jan 16:24:35 ntpd[1467]: Listen normally on 10 cilium_host [fe80::d4bd:24ff:fec2:9000%5]:123
Jan 29 16:24:35.036040 ntpd[1467]: 29 Jan 16:24:35 ntpd[1467]: Listen normally on 11 cilium_vxlan [fe80::fce1:18ff:fe7f:79c9%6]:123
Jan 29 16:24:35.036040 ntpd[1467]: 29 Jan 16:24:35 ntpd[1467]: Listen normally on 12 lxc_health [fe80::f0b2:3dff:fe26:5ed3%8]:123
Jan 29 16:24:35.036040 ntpd[1467]: 29 Jan 16:24:35 ntpd[1467]: Listen normally on 13 lxc54cd5daa578c [fe80::34fb:dbff:fe33:bcee%10]:123
Jan 29 16:24:35.036040 ntpd[1467]: 29 Jan 16:24:35 ntpd[1467]: Listen normally on 14 lxc7aa0dc40fb7b [fe80::3072:eff:fe9a:8d4%12]:123
Jan 29 16:24:35.035034 ntpd[1467]: Listen normally on 10 cilium_host [fe80::d4bd:24ff:fec2:9000%5]:123
Jan 29 16:24:35.035094 ntpd[1467]: Listen normally on 11 cilium_vxlan [fe80::fce1:18ff:fe7f:79c9%6]:123
Jan 29 16:24:35.035149 ntpd[1467]: Listen normally on 12 lxc_health [fe80::f0b2:3dff:fe26:5ed3%8]:123
Jan 29 16:24:35.035206 ntpd[1467]: Listen normally on 13 lxc54cd5daa578c [fe80::34fb:dbff:fe33:bcee%10]:123
Jan 29 16:24:35.035273 ntpd[1467]: Listen normally on 14 lxc7aa0dc40fb7b [fe80::3072:eff:fe9a:8d4%12]:123
Jan 29 16:24:36.372651 containerd[1500]: time="2025-01-29T16:24:36.371012576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:24:36.372651 containerd[1500]: time="2025-01-29T16:24:36.371097326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:24:36.372651 containerd[1500]: time="2025-01-29T16:24:36.371125412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:24:36.375268 containerd[1500]: time="2025-01-29T16:24:36.375030627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:24:36.428788 systemd[1]: Started cri-containerd-f64bfafdcd6dd6ac27e66277b29d287a2b76bdb18a89f4d5a95a2e4329a47ed1.scope - libcontainer container f64bfafdcd6dd6ac27e66277b29d287a2b76bdb18a89f4d5a95a2e4329a47ed1.
Jan 29 16:24:36.466888 containerd[1500]: time="2025-01-29T16:24:36.465496133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:24:36.466888 containerd[1500]: time="2025-01-29T16:24:36.466678554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:24:36.468654 containerd[1500]: time="2025-01-29T16:24:36.467184084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:24:36.469013 containerd[1500]: time="2025-01-29T16:24:36.468837489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:24:36.513832 systemd[1]: Started cri-containerd-38098f5cbbccdabcd5f13a7f6e4661820288f80d994cc922b090defb01423319.scope - libcontainer container 38098f5cbbccdabcd5f13a7f6e4661820288f80d994cc922b090defb01423319.
Jan 29 16:24:36.572137 containerd[1500]: time="2025-01-29T16:24:36.571988220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rhc68,Uid:055cdde1-9660-46dc-a260-dffca1e1959b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f64bfafdcd6dd6ac27e66277b29d287a2b76bdb18a89f4d5a95a2e4329a47ed1\""
Jan 29 16:24:36.580924 containerd[1500]: time="2025-01-29T16:24:36.580863120Z" level=info msg="CreateContainer within sandbox \"f64bfafdcd6dd6ac27e66277b29d287a2b76bdb18a89f4d5a95a2e4329a47ed1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 16:24:36.618357 containerd[1500]: time="2025-01-29T16:24:36.616933784Z" level=info msg="CreateContainer within sandbox \"f64bfafdcd6dd6ac27e66277b29d287a2b76bdb18a89f4d5a95a2e4329a47ed1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"864099d4c163d8b5b8fd99d2e92e0b19cc07a897f114f5a7498fa7b109e278e8\""
Jan 29 16:24:36.619813 containerd[1500]: time="2025-01-29T16:24:36.619769835Z" level=info msg="StartContainer for \"864099d4c163d8b5b8fd99d2e92e0b19cc07a897f114f5a7498fa7b109e278e8\""
Jan 29 16:24:36.645674 containerd[1500]: time="2025-01-29T16:24:36.645064980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-txgdg,Uid:91458dfe-1c42-4a86-be11-b8cf6618a860,Namespace:kube-system,Attempt:0,} returns sandbox id \"38098f5cbbccdabcd5f13a7f6e4661820288f80d994cc922b090defb01423319\""
Jan 29 16:24:36.654746 containerd[1500]: time="2025-01-29T16:24:36.654423634Z" level=info msg="CreateContainer within sandbox \"38098f5cbbccdabcd5f13a7f6e4661820288f80d994cc922b090defb01423319\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 16:24:36.683699 containerd[1500]: time="2025-01-29T16:24:36.680860476Z" level=info msg="CreateContainer within sandbox \"38098f5cbbccdabcd5f13a7f6e4661820288f80d994cc922b090defb01423319\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8abd243bbbd4a6c2ebe5087e8a000a061407f6f48a442b2b40e1d407fbe87b6\""
Jan 29 16:24:36.686221 containerd[1500]: time="2025-01-29T16:24:36.686173864Z" level=info msg="StartContainer for \"b8abd243bbbd4a6c2ebe5087e8a000a061407f6f48a442b2b40e1d407fbe87b6\""
Jan 29 16:24:36.688808 systemd[1]: Started cri-containerd-864099d4c163d8b5b8fd99d2e92e0b19cc07a897f114f5a7498fa7b109e278e8.scope - libcontainer container 864099d4c163d8b5b8fd99d2e92e0b19cc07a897f114f5a7498fa7b109e278e8.
Jan 29 16:24:36.770657 systemd[1]: Started cri-containerd-b8abd243bbbd4a6c2ebe5087e8a000a061407f6f48a442b2b40e1d407fbe87b6.scope - libcontainer container b8abd243bbbd4a6c2ebe5087e8a000a061407f6f48a442b2b40e1d407fbe87b6.
Jan 29 16:24:36.771947 containerd[1500]: time="2025-01-29T16:24:36.771863303Z" level=info msg="StartContainer for \"864099d4c163d8b5b8fd99d2e92e0b19cc07a897f114f5a7498fa7b109e278e8\" returns successfully" Jan 29 16:24:36.825589 containerd[1500]: time="2025-01-29T16:24:36.825050452Z" level=info msg="StartContainer for \"b8abd243bbbd4a6c2ebe5087e8a000a061407f6f48a442b2b40e1d407fbe87b6\" returns successfully" Jan 29 16:24:37.051147 kubelet[2718]: I0129 16:24:37.050627 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-txgdg" podStartSLOduration=26.050601735 podStartE2EDuration="26.050601735s" podCreationTimestamp="2025-01-29 16:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:24:37.04813737 +0000 UTC m=+40.466194484" watchObservedRunningTime="2025-01-29 16:24:37.050601735 +0000 UTC m=+40.468658854" Jan 29 16:24:37.068897 kubelet[2718]: I0129 16:24:37.068317 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rhc68" podStartSLOduration=26.068295314 podStartE2EDuration="26.068295314s" podCreationTimestamp="2025-01-29 16:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:24:37.06827269 +0000 UTC m=+40.486329807" watchObservedRunningTime="2025-01-29 16:24:37.068295314 +0000 UTC m=+40.486352435" Jan 29 16:25:19.647018 systemd[1]: Started sshd@8-10.128.0.57:22-147.75.109.163:53598.service - OpenSSH per-connection server daemon (147.75.109.163:53598). Jan 29 16:25:19.947880 sshd[4106]: Accepted publickey for core from 147.75.109.163 port 53598 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:25:19.949894 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:19.957377 systemd-logind[1483]: New session 8 of user core. Jan 29 16:25:19.963789 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:25:20.265504 sshd[4108]: Connection closed by 147.75.109.163 port 53598 Jan 29 16:25:20.267168 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:20.271850 systemd[1]: sshd@8-10.128.0.57:22-147.75.109.163:53598.service: Deactivated successfully. Jan 29 16:25:20.275273 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:25:20.277652 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:25:20.279371 systemd-logind[1483]: Removed session 8. Jan 29 16:25:25.323009 systemd[1]: Started sshd@9-10.128.0.57:22-147.75.109.163:53602.service - OpenSSH per-connection server daemon (147.75.109.163:53602). Jan 29 16:25:25.620045 sshd[4121]: Accepted publickey for core from 147.75.109.163 port 53602 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:25:25.621956 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:25.629386 systemd-logind[1483]: New session 9 of user core. Jan 29 16:25:25.634829 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:25:25.918300 sshd[4123]: Connection closed by 147.75.109.163 port 53602 Jan 29 16:25:25.919484 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:25.924432 systemd[1]: sshd@9-10.128.0.57:22-147.75.109.163:53602.service: Deactivated successfully. 
Jan 29 16:25:25.927810 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:25:25.930339 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:25:25.932457 systemd-logind[1483]: Removed session 9. Jan 29 16:25:30.977625 systemd[1]: Started sshd@10-10.128.0.57:22-147.75.109.163:58740.service - OpenSSH per-connection server daemon (147.75.109.163:58740). Jan 29 16:25:31.275263 sshd[4136]: Accepted publickey for core from 147.75.109.163 port 58740 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:25:31.277275 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:31.284645 systemd-logind[1483]: New session 10 of user core. Jan 29 16:25:31.289804 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 16:25:31.567035 sshd[4138]: Connection closed by 147.75.109.163 port 58740 Jan 29 16:25:31.567971 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:31.574197 systemd[1]: sshd@10-10.128.0.57:22-147.75.109.163:58740.service: Deactivated successfully. Jan 29 16:25:31.577476 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:25:31.578658 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:25:31.580674 systemd-logind[1483]: Removed session 10. Jan 29 16:25:36.624083 systemd[1]: Started sshd@11-10.128.0.57:22-147.75.109.163:58752.service - OpenSSH per-connection server daemon (147.75.109.163:58752). Jan 29 16:25:36.923872 sshd[4152]: Accepted publickey for core from 147.75.109.163 port 58752 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:25:36.926039 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:36.934712 systemd-logind[1483]: New session 11 of user core. Jan 29 16:25:36.939844 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 16:25:37.223201 sshd[4154]: Connection closed by 147.75.109.163 port 58752 Jan 29 16:25:37.223698 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:37.229888 systemd[1]: sshd@11-10.128.0.57:22-147.75.109.163:58752.service: Deactivated successfully. Jan 29 16:25:37.232743 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:25:37.233997 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit. Jan 29 16:25:37.235890 systemd-logind[1483]: Removed session 11. Jan 29 16:25:37.280954 systemd[1]: Started sshd@12-10.128.0.57:22-147.75.109.163:58758.service - OpenSSH per-connection server daemon (147.75.109.163:58758). Jan 29 16:25:37.587499 sshd[4167]: Accepted publickey for core from 147.75.109.163 port 58758 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:25:37.589343 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:37.595848 systemd-logind[1483]: New session 12 of user core. Jan 29 16:25:37.604861 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 16:25:37.949293 sshd[4169]: Connection closed by 147.75.109.163 port 58758 Jan 29 16:25:37.950606 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:37.960199 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit. Jan 29 16:25:37.965168 systemd[1]: sshd@12-10.128.0.57:22-147.75.109.163:58758.service: Deactivated successfully. 
Jan 29 16:25:37.970083 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:25:37.973091 systemd-logind[1483]: Removed session 12. Jan 29 16:25:38.007991 systemd[1]: Started sshd@13-10.128.0.57:22-147.75.109.163:40964.service - OpenSSH per-connection server daemon (147.75.109.163:40964). Jan 29 16:25:38.303775 sshd[4179]: Accepted publickey for core from 147.75.109.163 port 40964 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:25:38.305242 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:38.311868 systemd-logind[1483]: New session 13 of user core. Jan 29 16:25:38.317859 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 16:25:38.593618 sshd[4181]: Connection closed by 147.75.109.163 port 40964 Jan 29 16:25:38.594523 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:38.599661 systemd[1]: sshd@13-10.128.0.57:22-147.75.109.163:40964.service: Deactivated successfully. Jan 29 16:25:38.603004 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 16:25:38.605206 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit. Jan 29 16:25:38.607416 systemd-logind[1483]: Removed session 13. Jan 29 16:25:43.649006 systemd[1]: Started sshd@14-10.128.0.57:22-147.75.109.163:40978.service - OpenSSH per-connection server daemon (147.75.109.163:40978). Jan 29 16:25:43.950590 sshd[4196]: Accepted publickey for core from 147.75.109.163 port 40978 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:25:43.952606 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:43.960909 systemd-logind[1483]: New session 14 of user core. Jan 29 16:25:43.967857 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 16:25:44.246044 sshd[4198]: Connection closed by 147.75.109.163 port 40978 Jan 29 16:25:44.247326 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:44.251843 systemd[1]: sshd@14-10.128.0.57:22-147.75.109.163:40978.service: Deactivated successfully. Jan 29 16:25:44.255066 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 16:25:44.257350 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:25:44.259133 systemd-logind[1483]: Removed session 14. Jan 29 16:25:49.302388 systemd[1]: Started sshd@15-10.128.0.57:22-147.75.109.163:59608.service - OpenSSH per-connection server daemon (147.75.109.163:59608). Jan 29 16:25:49.598983 sshd[4209]: Accepted publickey for core from 147.75.109.163 port 59608 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:25:49.601040 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:49.608161 systemd-logind[1483]: New session 15 of user core. Jan 29 16:25:49.611805 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 16:25:49.892257 sshd[4211]: Connection closed by 147.75.109.163 port 59608 Jan 29 16:25:49.893735 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:49.899492 systemd[1]: sshd@15-10.128.0.57:22-147.75.109.163:59608.service: Deactivated successfully. Jan 29 16:25:49.902380 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 16:25:49.903818 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit. Jan 29 16:25:49.905436 systemd-logind[1483]: Removed session 15. 
Jan 29 16:25:49.951039 systemd[1]: Started sshd@16-10.128.0.57:22-147.75.109.163:59610.service - OpenSSH per-connection server daemon (147.75.109.163:59610). Jan 29 16:25:50.246162 sshd[4223]: Accepted publickey for core from 147.75.109.163 port 59610 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:25:50.248313 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:50.255758 systemd-logind[1483]: New session 16 of user core. Jan 29 16:25:50.260788 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 16:25:50.607150 sshd[4225]: Connection closed by 147.75.109.163 port 59610 Jan 29 16:25:50.608099 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:50.613886 systemd[1]: sshd@16-10.128.0.57:22-147.75.109.163:59610.service: Deactivated successfully. Jan 29 16:25:50.616920 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:25:50.618554 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:25:50.620220 systemd-logind[1483]: Removed session 16. Jan 29 16:25:50.665000 systemd[1]: Started sshd@17-10.128.0.57:22-147.75.109.163:59626.service - OpenSSH per-connection server daemon (147.75.109.163:59626). Jan 29 16:25:50.962372 sshd[4235]: Accepted publickey for core from 147.75.109.163 port 59626 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:25:50.964586 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:50.973665 systemd-logind[1483]: New session 17 of user core. Jan 29 16:25:50.978804 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 16:25:52.818374 sshd[4237]: Connection closed by 147.75.109.163 port 59626 Jan 29 16:25:52.820898 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:52.826737 systemd[1]: sshd@17-10.128.0.57:22-147.75.109.163:59626.service: Deactivated successfully. Jan 29 16:25:52.829918 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:25:52.832218 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:25:52.835318 systemd-logind[1483]: Removed session 17. Jan 29 16:25:52.877473 systemd[1]: Started sshd@18-10.128.0.57:22-147.75.109.163:59640.service - OpenSSH per-connection server daemon (147.75.109.163:59640). Jan 29 16:25:53.177778 sshd[4255]: Accepted publickey for core from 147.75.109.163 port 59640 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:25:53.180045 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:53.192300 systemd-logind[1483]: New session 18 of user core. Jan 29 16:25:53.196225 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 16:25:53.633231 sshd[4257]: Connection closed by 147.75.109.163 port 59640 Jan 29 16:25:53.634264 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:53.639447 systemd[1]: sshd@18-10.128.0.57:22-147.75.109.163:59640.service: Deactivated successfully. Jan 29 16:25:53.643237 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:25:53.646794 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit. Jan 29 16:25:53.649736 systemd-logind[1483]: Removed session 18. Jan 29 16:25:53.694238 systemd[1]: Started sshd@19-10.128.0.57:22-147.75.109.163:59652.service - OpenSSH per-connection server daemon (147.75.109.163:59652). 
Jan 29 16:25:53.992495 sshd[4266]: Accepted publickey for core from 147.75.109.163 port 59652 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:25:53.994419 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:54.000674 systemd-logind[1483]: New session 19 of user core. Jan 29 16:25:54.005811 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:25:54.288308 sshd[4268]: Connection closed by 147.75.109.163 port 59652 Jan 29 16:25:54.288901 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:54.294072 systemd[1]: sshd@19-10.128.0.57:22-147.75.109.163:59652.service: Deactivated successfully. Jan 29 16:25:54.296855 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:25:54.298917 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:25:54.300619 systemd-logind[1483]: Removed session 19. Jan 29 16:25:59.346016 systemd[1]: Started sshd@20-10.128.0.57:22-147.75.109.163:55598.service - OpenSSH per-connection server daemon (147.75.109.163:55598). Jan 29 16:25:59.640048 sshd[4285]: Accepted publickey for core from 147.75.109.163 port 55598 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:25:59.641843 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:59.648100 systemd-logind[1483]: New session 20 of user core. Jan 29 16:25:59.653808 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 16:25:59.931624 sshd[4287]: Connection closed by 147.75.109.163 port 55598 Jan 29 16:25:59.933699 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:59.939258 systemd[1]: sshd@20-10.128.0.57:22-147.75.109.163:55598.service: Deactivated successfully. Jan 29 16:25:59.942493 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 16:25:59.943765 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit. Jan 29 16:25:59.945256 systemd-logind[1483]: Removed session 20. Jan 29 16:26:04.993992 systemd[1]: Started sshd@21-10.128.0.57:22-147.75.109.163:55612.service - OpenSSH per-connection server daemon (147.75.109.163:55612). Jan 29 16:26:05.299493 sshd[4300]: Accepted publickey for core from 147.75.109.163 port 55612 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:26:05.301062 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:05.307154 systemd-logind[1483]: New session 21 of user core. Jan 29 16:26:05.313958 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 16:26:05.584364 sshd[4302]: Connection closed by 147.75.109.163 port 55612 Jan 29 16:26:05.585784 sshd-session[4300]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:05.591821 systemd[1]: sshd@21-10.128.0.57:22-147.75.109.163:55612.service: Deactivated successfully. Jan 29 16:26:05.594936 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 16:26:05.596270 systemd-logind[1483]: Session 21 logged out. Waiting for processes to exit. Jan 29 16:26:05.597919 systemd-logind[1483]: Removed session 21. Jan 29 16:26:10.639969 systemd[1]: Started sshd@22-10.128.0.57:22-147.75.109.163:59250.service - OpenSSH per-connection server daemon (147.75.109.163:59250). 
Jan 29 16:26:10.937995 sshd[4314]: Accepted publickey for core from 147.75.109.163 port 59250 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:26:10.939676 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:10.946705 systemd-logind[1483]: New session 22 of user core. Jan 29 16:26:10.954841 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 16:26:11.235820 sshd[4316]: Connection closed by 147.75.109.163 port 59250 Jan 29 16:26:11.237108 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:11.242814 systemd[1]: sshd@22-10.128.0.57:22-147.75.109.163:59250.service: Deactivated successfully. Jan 29 16:26:11.245756 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 16:26:11.247011 systemd-logind[1483]: Session 22 logged out. Waiting for processes to exit. Jan 29 16:26:11.248507 systemd-logind[1483]: Removed session 22. Jan 29 16:26:11.290963 systemd[1]: Started sshd@23-10.128.0.57:22-147.75.109.163:59264.service - OpenSSH per-connection server daemon (147.75.109.163:59264). Jan 29 16:26:11.590479 sshd[4328]: Accepted publickey for core from 147.75.109.163 port 59264 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:26:11.592290 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:11.598744 systemd-logind[1483]: New session 23 of user core. Jan 29 16:26:11.605916 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 16:26:13.379349 containerd[1500]: time="2025-01-29T16:26:13.379289277Z" level=info msg="StopContainer for \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\" with timeout 30 (s)" Jan 29 16:26:13.381999 containerd[1500]: time="2025-01-29T16:26:13.381955498Z" level=info msg="Stop container \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\" with signal terminated" Jan 29 16:26:13.409869 systemd[1]: run-containerd-runc-k8s.io-a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c-runc.gVo9qc.mount: Deactivated successfully. Jan 29 16:26:13.411261 systemd[1]: cri-containerd-9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff.scope: Deactivated successfully. Jan 29 16:26:13.430145 containerd[1500]: time="2025-01-29T16:26:13.429906257Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:26:13.443339 containerd[1500]: time="2025-01-29T16:26:13.443283944Z" level=info msg="StopContainer for \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\" with timeout 2 (s)" Jan 29 16:26:13.444097 containerd[1500]: time="2025-01-29T16:26:13.443754190Z" level=info msg="Stop container \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\" with signal terminated" Jan 29 16:26:13.461429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff-rootfs.mount: Deactivated successfully. Jan 29 16:26:13.465630 systemd-networkd[1350]: lxc_health: Link DOWN Jan 29 16:26:13.465672 systemd-networkd[1350]: lxc_health: Lost carrier Jan 29 16:26:13.484331 systemd[1]: cri-containerd-a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c.scope: Deactivated successfully. 
Jan 29 16:26:13.484789 systemd[1]: cri-containerd-a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c.scope: Consumed 9.709s CPU time, 126.3M memory peak, 152K read from disk, 13.3M written to disk. Jan 29 16:26:13.496365 containerd[1500]: time="2025-01-29T16:26:13.495702310Z" level=info msg="shim disconnected" id=9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff namespace=k8s.io Jan 29 16:26:13.496365 containerd[1500]: time="2025-01-29T16:26:13.495870913Z" level=warning msg="cleaning up after shim disconnected" id=9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff namespace=k8s.io Jan 29 16:26:13.496365 containerd[1500]: time="2025-01-29T16:26:13.495889050Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:13.537611 containerd[1500]: time="2025-01-29T16:26:13.537496284Z" level=info msg="StopContainer for \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\" returns successfully" Jan 29 16:26:13.537925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c-rootfs.mount: Deactivated successfully. Jan 29 16:26:13.539677 containerd[1500]: time="2025-01-29T16:26:13.539512870Z" level=info msg="shim disconnected" id=a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c namespace=k8s.io Jan 29 16:26:13.539677 containerd[1500]: time="2025-01-29T16:26:13.539601698Z" level=warning msg="cleaning up after shim disconnected" id=a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c namespace=k8s.io Jan 29 16:26:13.539677 containerd[1500]: time="2025-01-29T16:26:13.539617456Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:13.540840 containerd[1500]: time="2025-01-29T16:26:13.540710739Z" level=info msg="StopPodSandbox for \"03d8c7be62fb4625f6417721e41d80f317b24df53e12fdf00b8159d7a23534e4\"" Jan 29 16:26:13.540840 containerd[1500]: time="2025-01-29T16:26:13.540767321Z" level=info msg="Container to stop \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:13.550040 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03d8c7be62fb4625f6417721e41d80f317b24df53e12fdf00b8159d7a23534e4-shm.mount: Deactivated successfully. Jan 29 16:26:13.563186 systemd[1]: cri-containerd-03d8c7be62fb4625f6417721e41d80f317b24df53e12fdf00b8159d7a23534e4.scope: Deactivated successfully. 
Jan 29 16:26:13.566979 containerd[1500]: time="2025-01-29T16:26:13.566896164Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:26:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:26:13.571434 containerd[1500]: time="2025-01-29T16:26:13.571290379Z" level=info msg="StopContainer for \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\" returns successfully" Jan 29 16:26:13.572244 containerd[1500]: time="2025-01-29T16:26:13.571948864Z" level=info msg="StopPodSandbox for \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\"" Jan 29 16:26:13.572244 containerd[1500]: time="2025-01-29T16:26:13.571998338Z" level=info msg="Container to stop \"f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:13.572244 containerd[1500]: time="2025-01-29T16:26:13.572060794Z" level=info msg="Container to stop \"064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:13.572244 containerd[1500]: time="2025-01-29T16:26:13.572078438Z" level=info msg="Container to stop \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:13.572244 containerd[1500]: time="2025-01-29T16:26:13.572096591Z" level=info msg="Container to stop \"774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:13.572244 containerd[1500]: time="2025-01-29T16:26:13.572112125Z" level=info msg="Container to stop \"afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:13.584506 systemd[1]: cri-containerd-2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7.scope: Deactivated successfully. 
Jan 29 16:26:13.620089 containerd[1500]: time="2025-01-29T16:26:13.619721470Z" level=info msg="shim disconnected" id=03d8c7be62fb4625f6417721e41d80f317b24df53e12fdf00b8159d7a23534e4 namespace=k8s.io Jan 29 16:26:13.620089 containerd[1500]: time="2025-01-29T16:26:13.619797756Z" level=warning msg="cleaning up after shim disconnected" id=03d8c7be62fb4625f6417721e41d80f317b24df53e12fdf00b8159d7a23534e4 namespace=k8s.io Jan 29 16:26:13.620089 containerd[1500]: time="2025-01-29T16:26:13.619812885Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:13.620858 containerd[1500]: time="2025-01-29T16:26:13.620405454Z" level=info msg="shim disconnected" id=2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7 namespace=k8s.io Jan 29 16:26:13.620858 containerd[1500]: time="2025-01-29T16:26:13.620822047Z" level=warning msg="cleaning up after shim disconnected" id=2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7 namespace=k8s.io Jan 29 16:26:13.620858 containerd[1500]: time="2025-01-29T16:26:13.620840782Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:13.647088 containerd[1500]: time="2025-01-29T16:26:13.645873431Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:26:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:26:13.648160 containerd[1500]: time="2025-01-29T16:26:13.647955298Z" level=info msg="TearDown network for sandbox \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\" successfully" Jan 29 16:26:13.648160 containerd[1500]: time="2025-01-29T16:26:13.648006207Z" level=info msg="StopPodSandbox for \"2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7\" returns successfully" Jan 29 16:26:13.651280 containerd[1500]: time="2025-01-29T16:26:13.651244240Z" level=info msg="TearDown network for sandbox \"03d8c7be62fb4625f6417721e41d80f317b24df53e12fdf00b8159d7a23534e4\" successfully" Jan 29 16:26:13.651441 containerd[1500]: time="2025-01-29T16:26:13.651415453Z" level=info msg="StopPodSandbox for \"03d8c7be62fb4625f6417721e41d80f317b24df53e12fdf00b8159d7a23534e4\" returns successfully" Jan 29 16:26:13.723590 kubelet[2718]: I0129 16:26:13.721821 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c552780-7e29-4575-923d-f0e9ae8b5cce-cilium-config-path\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.723590 kubelet[2718]: I0129 16:26:13.721879 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-cilium-run\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.723590 kubelet[2718]: I0129 16:26:13.721917 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzkrn\" (UniqueName: \"kubernetes.io/projected/7802141a-6f7c-4695-92f1-7e02e5695f67-kube-api-access-tzkrn\") pod \"7802141a-6f7c-4695-92f1-7e02e5695f67\" (UID: \"7802141a-6f7c-4695-92f1-7e02e5695f67\") " Jan 29 16:26:13.723590 kubelet[2718]: I0129 16:26:13.721944 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vvgt\" (UniqueName: 
\"kubernetes.io/projected/4c552780-7e29-4575-923d-f0e9ae8b5cce-kube-api-access-6vvgt\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.723590 kubelet[2718]: I0129 16:26:13.721977 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7802141a-6f7c-4695-92f1-7e02e5695f67-cilium-config-path\") pod \"7802141a-6f7c-4695-92f1-7e02e5695f67\" (UID: \"7802141a-6f7c-4695-92f1-7e02e5695f67\") " Jan 29 16:26:13.723590 kubelet[2718]: I0129 16:26:13.722003 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-cilium-cgroup\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.724554 kubelet[2718]: I0129 16:26:13.722047 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c552780-7e29-4575-923d-f0e9ae8b5cce-clustermesh-secrets\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.724554 kubelet[2718]: I0129 16:26:13.722075 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-bpf-maps\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.724554 kubelet[2718]: I0129 16:26:13.722099 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-host-proc-sys-net\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.724554 kubelet[2718]: I0129 16:26:13.722130 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c552780-7e29-4575-923d-f0e9ae8b5cce-hubble-tls\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.724554 kubelet[2718]: I0129 16:26:13.722156 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-host-proc-sys-kernel\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.724554 kubelet[2718]: I0129 16:26:13.722179 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-hostproc\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.724912 kubelet[2718]: I0129 16:26:13.722210 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-xtables-lock\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.724912 kubelet[2718]: I0129 16:26:13.722235 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-lib-modules\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.724912 kubelet[2718]: I0129 16:26:13.722261 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-cni-path\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.724912 kubelet[2718]: I0129 16:26:13.722293 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-etc-cni-netd\") pod \"4c552780-7e29-4575-923d-f0e9ae8b5cce\" (UID: \"4c552780-7e29-4575-923d-f0e9ae8b5cce\") " Jan 29 16:26:13.724912 kubelet[2718]: I0129 16:26:13.722361 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:13.724912 kubelet[2718]: I0129 16:26:13.722414 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:13.726359 kubelet[2718]: I0129 16:26:13.726281 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:13.728146 kubelet[2718]: I0129 16:26:13.728107 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:13.728269 kubelet[2718]: I0129 16:26:13.728162 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:13.729110 kubelet[2718]: I0129 16:26:13.729055 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:13.729309 kubelet[2718]: I0129 16:26:13.729285 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-hostproc" (OuterVolumeSpecName: "hostproc") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:13.729501 kubelet[2718]: I0129 16:26:13.729452 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:13.729698 kubelet[2718]: I0129 16:26:13.729604 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:13.729698 kubelet[2718]: I0129 16:26:13.729643 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-cni-path" (OuterVolumeSpecName: "cni-path") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:13.729916 kubelet[2718]: I0129 16:26:13.729851 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c552780-7e29-4575-923d-f0e9ae8b5cce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:26:13.730358 kubelet[2718]: I0129 16:26:13.730282 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7802141a-6f7c-4695-92f1-7e02e5695f67-kube-api-access-tzkrn" (OuterVolumeSpecName: "kube-api-access-tzkrn") pod "7802141a-6f7c-4695-92f1-7e02e5695f67" (UID: "7802141a-6f7c-4695-92f1-7e02e5695f67"). InnerVolumeSpecName "kube-api-access-tzkrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:26:13.736345 kubelet[2718]: I0129 16:26:13.736224 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c552780-7e29-4575-923d-f0e9ae8b5cce-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:26:13.736598 kubelet[2718]: I0129 16:26:13.736528 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7802141a-6f7c-4695-92f1-7e02e5695f67-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7802141a-6f7c-4695-92f1-7e02e5695f67" (UID: "7802141a-6f7c-4695-92f1-7e02e5695f67"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:26:13.737590 kubelet[2718]: I0129 16:26:13.737497 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c552780-7e29-4575-923d-f0e9ae8b5cce-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:26:13.738406 kubelet[2718]: I0129 16:26:13.738355 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c552780-7e29-4575-923d-f0e9ae8b5cce-kube-api-access-6vvgt" (OuterVolumeSpecName: "kube-api-access-6vvgt") pod "4c552780-7e29-4575-923d-f0e9ae8b5cce" (UID: "4c552780-7e29-4575-923d-f0e9ae8b5cce"). InnerVolumeSpecName "kube-api-access-6vvgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:26:13.823007 kubelet[2718]: I0129 16:26:13.822955 2718 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-cilium-cgroup\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823007 kubelet[2718]: I0129 16:26:13.823003 2718 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c552780-7e29-4575-923d-f0e9ae8b5cce-clustermesh-secrets\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823282 kubelet[2718]: I0129 16:26:13.823040 2718 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7802141a-6f7c-4695-92f1-7e02e5695f67-cilium-config-path\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823282 kubelet[2718]: I0129 16:26:13.823059 2718 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-host-proc-sys-net\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823282 kubelet[2718]: I0129 16:26:13.823074 2718 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-bpf-maps\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823282 kubelet[2718]: I0129 16:26:13.823088 2718 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c552780-7e29-4575-923d-f0e9ae8b5cce-hubble-tls\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823282 kubelet[2718]: I0129 16:26:13.823102 2718 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-hostproc\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823282 kubelet[2718]: I0129 16:26:13.823116 2718 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-xtables-lock\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823282 kubelet[2718]: I0129 
16:26:13.823133 2718 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-host-proc-sys-kernel\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823534 kubelet[2718]: I0129 16:26:13.823148 2718 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-cni-path\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823534 kubelet[2718]: I0129 16:26:13.823163 2718 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-etc-cni-netd\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823534 kubelet[2718]: I0129 16:26:13.823177 2718 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-lib-modules\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823534 kubelet[2718]: I0129 16:26:13.823191 2718 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c552780-7e29-4575-923d-f0e9ae8b5cce-cilium-config-path\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823534 kubelet[2718]: I0129 16:26:13.823208 2718 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6vvgt\" (UniqueName: \"kubernetes.io/projected/4c552780-7e29-4575-923d-f0e9ae8b5cce-kube-api-access-6vvgt\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823534 kubelet[2718]: I0129 16:26:13.823224 2718 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c552780-7e29-4575-923d-f0e9ae8b5cce-cilium-run\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:13.823534 kubelet[2718]: I0129 16:26:13.823242 2718 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tzkrn\" (UniqueName: \"kubernetes.io/projected/7802141a-6f7c-4695-92f1-7e02e5695f67-kube-api-access-tzkrn\") on node \"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 16:26:14.260658 kubelet[2718]: I0129 16:26:14.260261 2718 scope.go:117] "RemoveContainer" containerID="9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff" Jan 29 16:26:14.263600 containerd[1500]: time="2025-01-29T16:26:14.262613269Z" level=info msg="RemoveContainer for \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\"" Jan 29 16:26:14.272378 containerd[1500]: time="2025-01-29T16:26:14.271266928Z" level=info msg="RemoveContainer for \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\" returns successfully" Jan 29 16:26:14.271442 systemd[1]: Removed slice kubepods-besteffort-pod7802141a_6f7c_4695_92f1_7e02e5695f67.slice - libcontainer container kubepods-besteffort-pod7802141a_6f7c_4695_92f1_7e02e5695f67.slice. 
Jan 29 16:26:14.273497 kubelet[2718]: I0129 16:26:14.273462 2718 scope.go:117] "RemoveContainer" containerID="9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff" Jan 29 16:26:14.280409 systemd[1]: Removed slice kubepods-burstable-pod4c552780_7e29_4575_923d_f0e9ae8b5cce.slice - libcontainer container kubepods-burstable-pod4c552780_7e29_4575_923d_f0e9ae8b5cce.slice. Jan 29 16:26:14.280796 systemd[1]: kubepods-burstable-pod4c552780_7e29_4575_923d_f0e9ae8b5cce.slice: Consumed 9.839s CPU time, 126.7M memory peak, 152K read from disk, 13.3M written to disk. Jan 29 16:26:14.281869 containerd[1500]: time="2025-01-29T16:26:14.281313068Z" level=error msg="ContainerStatus for \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\": not found" Jan 29 16:26:14.281961 kubelet[2718]: E0129 16:26:14.281806 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\": not found" containerID="9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff" Jan 29 16:26:14.282040 kubelet[2718]: I0129 16:26:14.281863 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff"} err="failed to get container status \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d97c9dc1275e91ddf67687ac28bf275f21f7b19d7a6686f864949768930fdff\": not found" Jan 29 16:26:14.282040 kubelet[2718]: I0129 16:26:14.282011 2718 scope.go:117] "RemoveContainer" containerID="a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c" Jan 29 16:26:14.292948 containerd[1500]: time="2025-01-29T16:26:14.292809813Z" level=info msg="RemoveContainer for \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\"" Jan 29 16:26:14.299341 containerd[1500]: time="2025-01-29T16:26:14.299278591Z" level=info msg="RemoveContainer for \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\" returns successfully" Jan 29 16:26:14.299554 kubelet[2718]: I0129 16:26:14.299512 2718 scope.go:117] "RemoveContainer" containerID="064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55" Jan 29 16:26:14.303442 containerd[1500]: time="2025-01-29T16:26:14.303392924Z" level=info msg="RemoveContainer for \"064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55\"" Jan 29 16:26:14.310763 containerd[1500]: time="2025-01-29T16:26:14.310581072Z" level=info msg="RemoveContainer for \"064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55\" returns successfully" Jan 29 16:26:14.311456 kubelet[2718]: I0129 16:26:14.311373 2718 scope.go:117] "RemoveContainer" containerID="afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb" Jan 29 16:26:14.314816 containerd[1500]: time="2025-01-29T16:26:14.314367271Z" level=info msg="RemoveContainer for \"afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb\"" Jan 29 16:26:14.319234 containerd[1500]: time="2025-01-29T16:26:14.319191125Z" level=info msg="RemoveContainer for \"afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb\" returns successfully" Jan 29 16:26:14.319659 kubelet[2718]: I0129 
16:26:14.319616 2718 scope.go:117] "RemoveContainer" containerID="f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02"
Jan 29 16:26:14.322706 containerd[1500]: time="2025-01-29T16:26:14.322670120Z" level=info msg="RemoveContainer for \"f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02\""
Jan 29 16:26:14.328348 containerd[1500]: time="2025-01-29T16:26:14.328309845Z" level=info msg="RemoveContainer for \"f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02\" returns successfully"
Jan 29 16:26:14.328643 kubelet[2718]: I0129 16:26:14.328618 2718 scope.go:117] "RemoveContainer" containerID="774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5"
Jan 29 16:26:14.330200 containerd[1500]: time="2025-01-29T16:26:14.330056560Z" level=info msg="RemoveContainer for \"774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5\""
Jan 29 16:26:14.334618 containerd[1500]: time="2025-01-29T16:26:14.334579109Z" level=info msg="RemoveContainer for \"774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5\" returns successfully"
Jan 29 16:26:14.334978 kubelet[2718]: I0129 16:26:14.334934 2718 scope.go:117] "RemoveContainer" containerID="a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c"
Jan 29 16:26:14.335310 containerd[1500]: time="2025-01-29T16:26:14.335243092Z" level=error msg="ContainerStatus for \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\": not found"
Jan 29 16:26:14.335514 kubelet[2718]: E0129 16:26:14.335419 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\": not found" containerID="a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c"
Jan 29 16:26:14.335705 kubelet[2718]: I0129 16:26:14.335529 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c"} err="failed to get container status \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a63a5ba82047ce64f1caed3d8c6a54579ba844a53c76721104573faa488c9d4c\": not found"
Jan 29 16:26:14.335705 kubelet[2718]: I0129 16:26:14.335586 2718 scope.go:117] "RemoveContainer" containerID="064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55"
Jan 29 16:26:14.335973 containerd[1500]: time="2025-01-29T16:26:14.335791939Z" level=error msg="ContainerStatus for \"064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55\": not found"
Jan 29 16:26:14.336120 kubelet[2718]: E0129 16:26:14.335944 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55\": not found" containerID="064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55"
Jan 29 16:26:14.336120 kubelet[2718]: I0129 16:26:14.336002 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55"} err="failed to get container status \"064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55\": rpc error: code = NotFound desc = an error occurred when try to find container \"064c2349725ef1162f0ad2f52081e103bf11bbd7a733984d46641177f01aea55\": not found"
Jan 29 16:26:14.336120 kubelet[2718]: I0129 16:26:14.336031 2718 scope.go:117] "RemoveContainer" containerID="afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb"
Jan 29 16:26:14.336372 containerd[1500]: time="2025-01-29T16:26:14.336339652Z" level=error msg="ContainerStatus for \"afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb\": not found"
Jan 29 16:26:14.336603 kubelet[2718]: E0129 16:26:14.336547 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb\": not found" containerID="afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb"
Jan 29 16:26:14.336695 kubelet[2718]: I0129 16:26:14.336605 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb"} err="failed to get container status \"afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"afa71e9f7801290cc4c93a03d07e4e86173a6a30362ebba44d8b52cf93c125fb\": not found"
Jan 29 16:26:14.336695 kubelet[2718]: I0129 16:26:14.336631 2718 scope.go:117] "RemoveContainer" containerID="f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02"
Jan 29 16:26:14.336967 containerd[1500]: time="2025-01-29T16:26:14.336915199Z" level=error msg="ContainerStatus for \"f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02\": not found"
Jan 29 16:26:14.337092 kubelet[2718]: E0129 16:26:14.337067 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02\": not found" containerID="f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02"
Jan 29 16:26:14.337167 kubelet[2718]: I0129 16:26:14.337107 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02"} err="failed to get container status \"f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02\": rpc error: code = NotFound desc = an error occurred when try to find container \"f116e57d8e351f11e00d5016831b9653d1420426e87da6b5f0538f044f64ff02\": not found"
Jan 29 16:26:14.337167 kubelet[2718]: I0129 16:26:14.337137 2718 scope.go:117] "RemoveContainer" containerID="774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5"
Jan 29 16:26:14.337540 containerd[1500]: time="2025-01-29T16:26:14.337501762Z" level=error msg="ContainerStatus for \"774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5\": not found"
Jan 29 16:26:14.337848 kubelet[2718]: E0129 16:26:14.337703 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5\": not found" containerID="774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5"
Jan 29 16:26:14.337848 kubelet[2718]: I0129 16:26:14.337732 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5"} err="failed to get container status \"774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5\": rpc error: code = NotFound desc = an error occurred when try to find container \"774b966085ca9235e6744bc5a3a2a35f2ca139ae3737656e3fcf5eedad384ef5\": not found"
Jan 29 16:26:14.394308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7-rootfs.mount: Deactivated successfully.
Jan 29 16:26:14.394750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03d8c7be62fb4625f6417721e41d80f317b24df53e12fdf00b8159d7a23534e4-rootfs.mount: Deactivated successfully.
Jan 29 16:26:14.395088 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2deee257931eaffba902d168e9b9e62b7f1d9e179506208d108a02e5419db2d7-shm.mount: Deactivated successfully.
Jan 29 16:26:14.395227 systemd[1]: var-lib-kubelet-pods-7802141a\x2d6f7c\x2d4695\x2d92f1\x2d7e02e5695f67-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtzkrn.mount: Deactivated successfully.
Jan 29 16:26:14.395342 systemd[1]: var-lib-kubelet-pods-4c552780\x2d7e29\x2d4575\x2d923d\x2df0e9ae8b5cce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6vvgt.mount: Deactivated successfully.
Jan 29 16:26:14.395469 systemd[1]: var-lib-kubelet-pods-4c552780\x2d7e29\x2d4575\x2d923d\x2df0e9ae8b5cce-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 29 16:26:14.395613 systemd[1]: var-lib-kubelet-pods-4c552780\x2d7e29\x2d4575\x2d923d\x2df0e9ae8b5cce-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 29 16:26:14.800782 kubelet[2718]: I0129 16:26:14.800723 2718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c552780-7e29-4575-923d-f0e9ae8b5cce" path="/var/lib/kubelet/pods/4c552780-7e29-4575-923d-f0e9ae8b5cce/volumes"
Jan 29 16:26:14.802040 kubelet[2718]: I0129 16:26:14.801968 2718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7802141a-6f7c-4695-92f1-7e02e5695f67" path="/var/lib/kubelet/pods/7802141a-6f7c-4695-92f1-7e02e5695f67/volumes"
Jan 29 16:26:15.357502 sshd[4330]: Connection closed by 147.75.109.163 port 59264
Jan 29 16:26:15.358552 sshd-session[4328]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:15.366757 systemd[1]: sshd@23-10.128.0.57:22-147.75.109.163:59264.service: Deactivated successfully.
Jan 29 16:26:15.369848 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 16:26:15.370214 systemd[1]: session-23.scope: Consumed 1.020s CPU time, 23.8M memory peak.
Jan 29 16:26:15.371045 systemd-logind[1483]: Session 23 logged out. Waiting for processes to exit.
Jan 29 16:26:15.372863 systemd-logind[1483]: Removed session 23.
Jan 29 16:26:15.415002 systemd[1]: Started sshd@24-10.128.0.57:22-147.75.109.163:59266.service - OpenSSH per-connection server daemon (147.75.109.163:59266).
Jan 29 16:26:15.712485 sshd[4497]: Accepted publickey for core from 147.75.109.163 port 59266 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ
Jan 29 16:26:15.714331 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:26:15.721483 systemd-logind[1483]: New session 24 of user core.
Jan 29 16:26:15.726805 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 16:26:16.034490 ntpd[1467]: Deleting interface #12 lxc_health, fe80::f0b2:3dff:fe26:5ed3%8#123, interface stats: received=0, sent=0, dropped=0, active_time=101 secs
Jan 29 16:26:16.035272 ntpd[1467]: 29 Jan 16:26:16 ntpd[1467]: Deleting interface #12 lxc_health, fe80::f0b2:3dff:fe26:5ed3%8#123, interface stats: received=0, sent=0, dropped=0, active_time=101 secs
Jan 29 16:26:16.956947 kubelet[2718]: I0129 16:26:16.956494 2718 topology_manager.go:215] "Topology Admit Handler" podUID="79cc660f-b88e-457c-8685-6807cd65a6ef" podNamespace="kube-system" podName="cilium-phf46"
Jan 29 16:26:16.957533 kubelet[2718]: E0129 16:26:16.957056 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c552780-7e29-4575-923d-f0e9ae8b5cce" containerName="cilium-agent"
Jan 29 16:26:16.957533 kubelet[2718]: E0129 16:26:16.957080 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c552780-7e29-4575-923d-f0e9ae8b5cce" containerName="mount-cgroup"
Jan 29 16:26:16.957533 kubelet[2718]: E0129 16:26:16.957092 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c552780-7e29-4575-923d-f0e9ae8b5cce" containerName="mount-bpf-fs"
Jan 29 16:26:16.957533 kubelet[2718]: E0129 16:26:16.957102 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c552780-7e29-4575-923d-f0e9ae8b5cce" containerName="clean-cilium-state"
Jan 29 16:26:16.957533 kubelet[2718]: E0129 16:26:16.957114 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c552780-7e29-4575-923d-f0e9ae8b5cce" containerName="apply-sysctl-overwrites"
Jan 29 16:26:16.957533 kubelet[2718]: E0129 16:26:16.957126 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7802141a-6f7c-4695-92f1-7e02e5695f67" containerName="cilium-operator"
Jan 29 16:26:16.957533 kubelet[2718]: I0129 16:26:16.957169 2718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7802141a-6f7c-4695-92f1-7e02e5695f67" containerName="cilium-operator"
Jan 29 16:26:16.957533 kubelet[2718]: I0129 16:26:16.957181 2718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c552780-7e29-4575-923d-f0e9ae8b5cce" containerName="cilium-agent"
Jan 29 16:26:16.963612 sshd[4499]: Connection closed by 147.75.109.163 port 59266
Jan 29 16:26:16.966904 sshd-session[4497]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:16.984421 systemd[1]: Created slice kubepods-burstable-pod79cc660f_b88e_457c_8685_6807cd65a6ef.slice - libcontainer container kubepods-burstable-pod79cc660f_b88e_457c_8685_6807cd65a6ef.slice.
Jan 29 16:26:16.986738 systemd[1]: sshd@24-10.128.0.57:22-147.75.109.163:59266.service: Deactivated successfully.
Jan 29 16:26:16.993527 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 16:26:16.997245 systemd[1]: session-24.scope: Consumed 1.001s CPU time, 23.8M memory peak.
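Two things happen in the span above: the kubelet admits the replacement pod cilium-phf46 (the "Topology Admit Handler" entry, with stale CPU/memory-manager state from the old pod UIDs being dropped), and systemd creates the pod's transient cgroup slice. The slice name embeds the pod's QoS class and UID, with the UID's dashes rewritten to underscores because "-" is systemd's slice hierarchy separator. A sketch of that naming convention follows; podSliceName is a hypothetical helper, not kubelet code.

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the systemd slice name the kubelet's systemd cgroup
// driver uses for a pod: dashes in the UID become underscores.
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	fmt.Println(podSliceName("burstable", "79cc660f-b88e-457c-8685-6807cd65a6ef"))
	// Output matches the journal entry:
	// kubepods-burstable-pod79cc660f_b88e_457c_8685_6807cd65a6ef.slice
}
```

Reversing the mapping (underscores back to dashes) recovers the pod UID that appears in the volume paths and kubelet entries below.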
Jan 29 16:26:16.999596 systemd-logind[1483]: Session 24 logged out. Waiting for processes to exit.
Jan 29 16:26:17.034921 systemd[1]: Started sshd@25-10.128.0.57:22-147.75.109.163:59270.service - OpenSSH per-connection server daemon (147.75.109.163:59270).
Jan 29 16:26:17.037667 systemd-logind[1483]: Removed session 24.
Jan 29 16:26:17.044592 kubelet[2718]: I0129 16:26:17.043092 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79cc660f-b88e-457c-8685-6807cd65a6ef-lib-modules\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.044592 kubelet[2718]: I0129 16:26:17.043146 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79cc660f-b88e-457c-8685-6807cd65a6ef-cilium-config-path\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.044592 kubelet[2718]: I0129 16:26:17.043184 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79cc660f-b88e-457c-8685-6807cd65a6ef-host-proc-sys-net\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.044592 kubelet[2718]: I0129 16:26:17.043214 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79cc660f-b88e-457c-8685-6807cd65a6ef-hostproc\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.044592 kubelet[2718]: I0129 16:26:17.043243 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79cc660f-b88e-457c-8685-6807cd65a6ef-etc-cni-netd\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.044592 kubelet[2718]: I0129 16:26:17.043270 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/79cc660f-b88e-457c-8685-6807cd65a6ef-cilium-ipsec-secrets\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.045054 kubelet[2718]: I0129 16:26:17.043298 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79cc660f-b88e-457c-8685-6807cd65a6ef-hubble-tls\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.045054 kubelet[2718]: I0129 16:26:17.043339 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79cc660f-b88e-457c-8685-6807cd65a6ef-cilium-run\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.045054 kubelet[2718]: I0129 16:26:17.043828 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79cc660f-b88e-457c-8685-6807cd65a6ef-cni-path\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.045054 kubelet[2718]: I0129 16:26:17.044110 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79cc660f-b88e-457c-8685-6807cd65a6ef-bpf-maps\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.045054 kubelet[2718]: I0129 16:26:17.044156 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jczm\" (UniqueName: \"kubernetes.io/projected/79cc660f-b88e-457c-8685-6807cd65a6ef-kube-api-access-2jczm\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.045054 kubelet[2718]: I0129 16:26:17.044642 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79cc660f-b88e-457c-8685-6807cd65a6ef-cilium-cgroup\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.045362 kubelet[2718]: I0129 16:26:17.044699 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79cc660f-b88e-457c-8685-6807cd65a6ef-xtables-lock\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.045362 kubelet[2718]: I0129 16:26:17.044727 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79cc660f-b88e-457c-8685-6807cd65a6ef-clustermesh-secrets\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.045362 kubelet[2718]: I0129 16:26:17.044886 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79cc660f-b88e-457c-8685-6807cd65a6ef-host-proc-sys-kernel\") pod \"cilium-phf46\" (UID: \"79cc660f-b88e-457c-8685-6807cd65a6ef\") " pod="kube-system/cilium-phf46"
Jan 29 16:26:17.050614 kubelet[2718]: E0129 16:26:17.050407 2718 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 16:26:17.326391 containerd[1500]: time="2025-01-29T16:26:17.326241519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-phf46,Uid:79cc660f-b88e-457c-8685-6807cd65a6ef,Namespace:kube-system,Attempt:0,}"
Jan 29 16:26:17.364917 containerd[1500]: time="2025-01-29T16:26:17.362956480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:26:17.364917 containerd[1500]: time="2025-01-29T16:26:17.363046544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:26:17.364917 containerd[1500]: time="2025-01-29T16:26:17.363067938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:17.364917 containerd[1500]: time="2025-01-29T16:26:17.363201126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:17.380195 sshd[4508]: Accepted publickey for core from 147.75.109.163 port 59270 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ
Jan 29 16:26:17.382919 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:26:17.399915 systemd[1]: Started cri-containerd-a7d669138c7837bf116d188b7aff8f40dc4a0624b654ce603e5b701b965ccd85.scope - libcontainer container a7d669138c7837bf116d188b7aff8f40dc4a0624b654ce603e5b701b965ccd85.
Jan 29 16:26:17.406896 systemd-logind[1483]: New session 25 of user core.
Jan 29 16:26:17.412049 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 16:26:17.442422 containerd[1500]: time="2025-01-29T16:26:17.442288144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-phf46,Uid:79cc660f-b88e-457c-8685-6807cd65a6ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7d669138c7837bf116d188b7aff8f40dc4a0624b654ce603e5b701b965ccd85\""
Jan 29 16:26:17.447745 containerd[1500]: time="2025-01-29T16:26:17.446681232Z" level=info msg="CreateContainer within sandbox \"a7d669138c7837bf116d188b7aff8f40dc4a0624b654ce603e5b701b965ccd85\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:26:17.464085 containerd[1500]: time="2025-01-29T16:26:17.463959511Z" level=info msg="CreateContainer within sandbox \"a7d669138c7837bf116d188b7aff8f40dc4a0624b654ce603e5b701b965ccd85\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6827fd1e09d864eca8e2bae86c7cdff24ebfd851e90631e54f9c1bac8971a0cc\""
Jan 29 16:26:17.465119 containerd[1500]: time="2025-01-29T16:26:17.464897821Z" level=info msg="StartContainer for \"6827fd1e09d864eca8e2bae86c7cdff24ebfd851e90631e54f9c1bac8971a0cc\""
Jan 29 16:26:17.501787 systemd[1]: Started cri-containerd-6827fd1e09d864eca8e2bae86c7cdff24ebfd851e90631e54f9c1bac8971a0cc.scope - libcontainer container 6827fd1e09d864eca8e2bae86c7cdff24ebfd851e90631e54f9c1bac8971a0cc.
Jan 29 16:26:17.536193 containerd[1500]: time="2025-01-29T16:26:17.536141524Z" level=info msg="StartContainer for \"6827fd1e09d864eca8e2bae86c7cdff24ebfd851e90631e54f9c1bac8971a0cc\" returns successfully"
Jan 29 16:26:17.557503 systemd[1]: cri-containerd-6827fd1e09d864eca8e2bae86c7cdff24ebfd851e90631e54f9c1bac8971a0cc.scope: Deactivated successfully.
Jan 29 16:26:17.596098 sshd[4548]: Connection closed by 147.75.109.163 port 59270
Jan 29 16:26:17.596906 sshd-session[4508]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:17.603881 systemd[1]: sshd@25-10.128.0.57:22-147.75.109.163:59270.service: Deactivated successfully.
Jan 29 16:26:17.607214 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 16:26:17.608524 systemd-logind[1483]: Session 25 logged out. Waiting for processes to exit.
Jan 29 16:26:17.610069 systemd-logind[1483]: Removed session 25.
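The containerd entries trace the standard CRI sequence: RunPodSandbox returns a sandbox ID (a7d66913...), each container is created "within" that sandbox, and StartContainer launches it under its own transient cri-containerd-<id>.scope unit. A compilable sketch of the same three calls follows, assuming the CRI v1 API from k8s.io/cri-api; the metadata values are copied from the log, while the image reference and client wiring are placeholders.

```go
package sketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// runMountCgroup replays the three CRI calls visible in the log for the
// first init container of cilium-phf46.
func runMountCgroup(ctx context.Context, rs runtimeapi.RuntimeServiceClient) error {
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-phf46",
			Uid:       "79cc660f-b88e-457c-8685-6807cd65a6ef",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	// "RunPodSandbox ... returns sandbox id \"a7d66913...\"" in the log.
	sb, err := rs.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}
	// "CreateContainer within sandbox ... Name:mount-cgroup".
	ctr, err := rs.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "example.invalid/cilium:placeholder"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}
	// "StartContainer for \"6827fd1e...\" returns successfully".
	_, err = rs.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return err
}
```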
Jan 29 16:26:17.614808 containerd[1500]: time="2025-01-29T16:26:17.614649513Z" level=info msg="shim disconnected" id=6827fd1e09d864eca8e2bae86c7cdff24ebfd851e90631e54f9c1bac8971a0cc namespace=k8s.io
Jan 29 16:26:17.614808 containerd[1500]: time="2025-01-29T16:26:17.614743903Z" level=warning msg="cleaning up after shim disconnected" id=6827fd1e09d864eca8e2bae86c7cdff24ebfd851e90631e54f9c1bac8971a0cc namespace=k8s.io
Jan 29 16:26:17.615866 containerd[1500]: time="2025-01-29T16:26:17.614776160Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:26:17.656035 systemd[1]: Started sshd@26-10.128.0.57:22-147.75.109.163:54376.service - OpenSSH per-connection server daemon (147.75.109.163:54376).
Jan 29 16:26:17.953026 sshd[4624]: Accepted publickey for core from 147.75.109.163 port 54376 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ
Jan 29 16:26:17.954891 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:26:17.961662 systemd-logind[1483]: New session 26 of user core.
Jan 29 16:26:17.967846 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 16:26:18.284575 containerd[1500]: time="2025-01-29T16:26:18.284411500Z" level=info msg="CreateContainer within sandbox \"a7d669138c7837bf116d188b7aff8f40dc4a0624b654ce603e5b701b965ccd85\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:26:18.307376 containerd[1500]: time="2025-01-29T16:26:18.305418516Z" level=info msg="CreateContainer within sandbox \"a7d669138c7837bf116d188b7aff8f40dc4a0624b654ce603e5b701b965ccd85\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bbbcc6489394dee364954146f59a7c4a244b59f4e7026f4ae2c882219465ac98\""
Jan 29 16:26:18.308024 containerd[1500]: time="2025-01-29T16:26:18.307979394Z" level=info msg="StartContainer for \"bbbcc6489394dee364954146f59a7c4a244b59f4e7026f4ae2c882219465ac98\""
Jan 29 16:26:18.363282 systemd[1]: Started cri-containerd-bbbcc6489394dee364954146f59a7c4a244b59f4e7026f4ae2c882219465ac98.scope - libcontainer container bbbcc6489394dee364954146f59a7c4a244b59f4e7026f4ae2c882219465ac98.
Jan 29 16:26:18.401245 containerd[1500]: time="2025-01-29T16:26:18.401084278Z" level=info msg="StartContainer for \"bbbcc6489394dee364954146f59a7c4a244b59f4e7026f4ae2c882219465ac98\" returns successfully"
Jan 29 16:26:18.409017 systemd[1]: cri-containerd-bbbcc6489394dee364954146f59a7c4a244b59f4e7026f4ae2c882219465ac98.scope: Deactivated successfully.
Jan 29 16:26:18.444271 containerd[1500]: time="2025-01-29T16:26:18.443833213Z" level=info msg="shim disconnected" id=bbbcc6489394dee364954146f59a7c4a244b59f4e7026f4ae2c882219465ac98 namespace=k8s.io
Jan 29 16:26:18.444271 containerd[1500]: time="2025-01-29T16:26:18.443911920Z" level=warning msg="cleaning up after shim disconnected" id=bbbcc6489394dee364954146f59a7c4a244b59f4e7026f4ae2c882219465ac98 namespace=k8s.io
Jan 29 16:26:18.444271 containerd[1500]: time="2025-01-29T16:26:18.443926500Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:26:19.153973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbbcc6489394dee364954146f59a7c4a244b59f4e7026f4ae2c882219465ac98-rootfs.mount: Deactivated successfully.
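mount-cgroup and apply-sysctl-overwrites are init containers, so each runs to completion almost immediately; containerd's runc v2 shim then tears itself down, which is what the "shim disconnected" / "cleaning up dead shim" lines record, and systemd reports the transient scope and rootfs mounts deactivated. The same exit can be observed from containerd's Go client; the following is a sketch assuming the containerd v1 client library and the k8s.io namespace that CRI-managed containers live in.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-created containers are placed in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ctr, err := client.LoadContainer(ctx,
		"6827fd1e09d864eca8e2bae86c7cdff24ebfd851e90631e54f9c1bac8971a0cc")
	if err != nil {
		log.Fatal(err)
	}
	task, err := ctr.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	// Delivered when the init container finishes; shortly after, the shim
	// exits and containerd logs the cleanup lines seen above.
	status := <-exitCh
	code, exitedAt, _ := status.Result()
	log.Printf("exited with code %d at %s", code, exitedAt)
}
```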
Jan 29 16:26:19.289596 containerd[1500]: time="2025-01-29T16:26:19.289522999Z" level=info msg="CreateContainer within sandbox \"a7d669138c7837bf116d188b7aff8f40dc4a0624b654ce603e5b701b965ccd85\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:26:19.314725 containerd[1500]: time="2025-01-29T16:26:19.313048021Z" level=info msg="CreateContainer within sandbox \"a7d669138c7837bf116d188b7aff8f40dc4a0624b654ce603e5b701b965ccd85\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1eababaa8bb7b4d79f0eee4732e41bd68f3459ee91b013db76e9402322c0a508\""
Jan 29 16:26:19.316037 containerd[1500]: time="2025-01-29T16:26:19.315999619Z" level=info msg="StartContainer for \"1eababaa8bb7b4d79f0eee4732e41bd68f3459ee91b013db76e9402322c0a508\""
Jan 29 16:26:19.373768 systemd[1]: Started cri-containerd-1eababaa8bb7b4d79f0eee4732e41bd68f3459ee91b013db76e9402322c0a508.scope - libcontainer container 1eababaa8bb7b4d79f0eee4732e41bd68f3459ee91b013db76e9402322c0a508.
Jan 29 16:26:19.422144 containerd[1500]: time="2025-01-29T16:26:19.421765895Z" level=info msg="StartContainer for \"1eababaa8bb7b4d79f0eee4732e41bd68f3459ee91b013db76e9402322c0a508\" returns successfully"
Jan 29 16:26:19.426262 systemd[1]: cri-containerd-1eababaa8bb7b4d79f0eee4732e41bd68f3459ee91b013db76e9402322c0a508.scope: Deactivated successfully.
Jan 29 16:26:19.463124 containerd[1500]: time="2025-01-29T16:26:19.463036325Z" level=info msg="shim disconnected" id=1eababaa8bb7b4d79f0eee4732e41bd68f3459ee91b013db76e9402322c0a508 namespace=k8s.io
Jan 29 16:26:19.463396 containerd[1500]: time="2025-01-29T16:26:19.463242867Z" level=warning msg="cleaning up after shim disconnected" id=1eababaa8bb7b4d79f0eee4732e41bd68f3459ee91b013db76e9402322c0a508 namespace=k8s.io
Jan 29 16:26:19.463396 containerd[1500]: time="2025-01-29T16:26:19.463266969Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:26:19.966050 kubelet[2718]: I0129 16:26:19.965717 2718 setters.go:580] "Node became not ready" node="ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:26:19Z","lastTransitionTime":"2025-01-29T16:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 16:26:20.154286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1eababaa8bb7b4d79f0eee4732e41bd68f3459ee91b013db76e9402322c0a508-rootfs.mount: Deactivated successfully.
Jan 29 16:26:20.295758 containerd[1500]: time="2025-01-29T16:26:20.295626907Z" level=info msg="CreateContainer within sandbox \"a7d669138c7837bf116d188b7aff8f40dc4a0624b654ce603e5b701b965ccd85\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:26:20.322094 containerd[1500]: time="2025-01-29T16:26:20.321992638Z" level=info msg="CreateContainer within sandbox \"a7d669138c7837bf116d188b7aff8f40dc4a0624b654ce603e5b701b965ccd85\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3d4c4df3073c89aea4e243d90419c449c12ea9925d4d55c950a227a57d3c40e3\""
Jan 29 16:26:20.322505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4093560000.mount: Deactivated successfully.
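With the old cilium-agent gone and the replacement not yet running, the kubelet flips the node's Ready condition to False with reason KubeletNotReady ("cni plugin not initialized"); the condition clears once the new agent brings the CNI back up. Below is a sketch that reads the same condition through client-go, assuming in-cluster credentials; the node name is taken from the journal entry above.

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"ci-4230-0-0-4935a9fd1f58bea9bc41.c.flatcar-212911.internal",
		metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			// While the CNI is uninitialized this prints Status=False,
			// Reason=KubeletNotReady, matching the journal entry above.
			fmt.Printf("Ready=%s reason=%s: %s\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}
```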
Jan 29 16:26:20.328390 containerd[1500]: time="2025-01-29T16:26:20.325408001Z" level=info msg="StartContainer for \"3d4c4df3073c89aea4e243d90419c449c12ea9925d4d55c950a227a57d3c40e3\""
Jan 29 16:26:20.384241 systemd[1]: Started cri-containerd-3d4c4df3073c89aea4e243d90419c449c12ea9925d4d55c950a227a57d3c40e3.scope - libcontainer container 3d4c4df3073c89aea4e243d90419c449c12ea9925d4d55c950a227a57d3c40e3.
Jan 29 16:26:20.429112 systemd[1]: cri-containerd-3d4c4df3073c89aea4e243d90419c449c12ea9925d4d55c950a227a57d3c40e3.scope: Deactivated successfully.
Jan 29 16:26:20.434038 containerd[1500]: time="2025-01-29T16:26:20.433896174Z" level=info msg="StartContainer for \"3d4c4df3073c89aea4e243d90419c449c12ea9925d4d55c950a227a57d3c40e3\" returns successfully"
Jan 29 16:26:20.466609 containerd[1500]: time="2025-01-29T16:26:20.466465169Z" level=info msg="shim disconnected" id=3d4c4df3073c89aea4e243d90419c449c12ea9925d4d55c950a227a57d3c40e3 namespace=k8s.io
Jan 29 16:26:20.467009 containerd[1500]: time="2025-01-29T16:26:20.466584058Z" level=warning msg="cleaning up after shim disconnected" id=3d4c4df3073c89aea4e243d90419c449c12ea9925d4d55c950a227a57d3c40e3 namespace=k8s.io
Jan 29 16:26:20.467009 containerd[1500]: time="2025-01-29T16:26:20.466632638Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:26:21.154222 systemd[1]: run-containerd-runc-k8s.io-3d4c4df3073c89aea4e243d90419c449c12ea9925d4d55c950a227a57d3c40e3-runc.Sap7UZ.mount: Deactivated successfully.
Jan 29 16:26:21.154409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d4c4df3073c89aea4e243d90419c449c12ea9925d4d55c950a227a57d3c40e3-rootfs.mount: Deactivated successfully.
Jan 29 16:26:21.301460 containerd[1500]: time="2025-01-29T16:26:21.301290084Z" level=info msg="CreateContainer within sandbox \"a7d669138c7837bf116d188b7aff8f40dc4a0624b654ce603e5b701b965ccd85\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:26:21.329849 containerd[1500]: time="2025-01-29T16:26:21.328363164Z" level=info msg="CreateContainer within sandbox \"a7d669138c7837bf116d188b7aff8f40dc4a0624b654ce603e5b701b965ccd85\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6bf9112c438dcc9082d3e5fec66a5e33dc13d6f0366f2f75eb9e191a669b86e9\""
Jan 29 16:26:21.333585 containerd[1500]: time="2025-01-29T16:26:21.332054335Z" level=info msg="StartContainer for \"6bf9112c438dcc9082d3e5fec66a5e33dc13d6f0366f2f75eb9e191a669b86e9\""
Jan 29 16:26:21.333281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2486731647.mount: Deactivated successfully.
Jan 29 16:26:21.385792 systemd[1]: Started cri-containerd-6bf9112c438dcc9082d3e5fec66a5e33dc13d6f0366f2f75eb9e191a669b86e9.scope - libcontainer container 6bf9112c438dcc9082d3e5fec66a5e33dc13d6f0366f2f75eb9e191a669b86e9.
Jan 29 16:26:21.426319 containerd[1500]: time="2025-01-29T16:26:21.426057491Z" level=info msg="StartContainer for \"6bf9112c438dcc9082d3e5fec66a5e33dc13d6f0366f2f75eb9e191a669b86e9\" returns successfully"
Jan 29 16:26:22.004606 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 16:26:25.361075 systemd-networkd[1350]: lxc_health: Link UP
Jan 29 16:26:25.368590 systemd-networkd[1350]: lxc_health: Gained carrier
Jan 29 16:26:26.673671 systemd-networkd[1350]: lxc_health: Gained IPv6LL
Jan 29 16:26:26.680928 systemd[1]: run-containerd-runc-k8s.io-6bf9112c438dcc9082d3e5fec66a5e33dc13d6f0366f2f75eb9e191a669b86e9-runc.hksDhN.mount: Deactivated successfully.
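Unlike the four init containers, cilium-agent keeps running: there is no scope deactivation after its StartContainer. The kernel's seqiv(rfc4106(gcm(aes))) probe is consistent with the pod's cilium-ipsec-secrets volume, i.e. the agent setting up ESP with AES-GCM for transparent encryption, and lxc_health is the veth device the agent creates for node-to-endpoint health probes, which systemd-networkd reports up with an IPv6 link-local address. The following sketch inspects the same device via the vishvananda/netlink package (illustrative, not Cilium code).

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	link, err := netlink.LinkByName("lxc_health")
	if err != nil {
		log.Fatal(err) // the device only exists once cilium-agent is up
	}
	fmt.Printf("state=%s\n", link.Attrs().OperState) // "up" after "Gained carrier"

	// "Gained IPv6LL" corresponds to an fe80::/64 address appearing here.
	addrs, err := netlink.AddrList(link, netlink.FAMILY_V6)
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range addrs {
		fmt.Println(a.IP)
	}
}
```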
Jan 29 16:26:27.361919 kubelet[2718]: I0129 16:26:27.361497 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-phf46" podStartSLOduration=11.361471197 podStartE2EDuration="11.361471197s" podCreationTimestamp="2025-01-29 16:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:26:22.328461611 +0000 UTC m=+145.746518727" watchObservedRunningTime="2025-01-29 16:26:27.361471197 +0000 UTC m=+150.779528312"
Jan 29 16:26:29.034653 ntpd[1467]: Listen normally on 15 lxc_health [fe80::8035:89ff:feb4:70a6%14]:123
Jan 29 16:26:29.035383 ntpd[1467]: 29 Jan 16:26:29 ntpd[1467]: Listen normally on 15 lxc_health [fe80::8035:89ff:feb4:70a6%14]:123
Jan 29 16:26:31.148433 systemd[1]: run-containerd-runc-k8s.io-6bf9112c438dcc9082d3e5fec66a5e33dc13d6f0366f2f75eb9e191a669b86e9-runc.evhf82.mount: Deactivated successfully.
Jan 29 16:26:31.294734 sshd[4626]: Connection closed by 147.75.109.163 port 54376
Jan 29 16:26:31.295933 sshd-session[4624]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:31.304476 systemd[1]: sshd@26-10.128.0.57:22-147.75.109.163:54376.service: Deactivated successfully.
Jan 29 16:26:31.309941 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 16:26:31.314180 systemd-logind[1483]: Session 26 logged out. Waiting for processes to exit.
Jan 29 16:26:31.316862 systemd-logind[1483]: Removed session 26.
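The startup-latency entry closes the loop on the pod restart: both pull timestamps are the zero value (0001-01-01), meaning no image pull counted against the SLO because the Cilium image was already cached, so podStartSLOduration appears to be simply watchObservedRunningTime minus podCreationTimestamp. The arithmetic checks out against the logged value:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above;
	// the layouts match Go's default time.Time string format.
	created, _ := time.Parse("2006-01-02 15:04:05 -0700 MST",
		"2025-01-29 16:26:16 +0000 UTC")
	observed, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2025-01-29 16:26:27.361471197 +0000 UTC")
	fmt.Printf("%.9f\n", observed.Sub(created).Seconds()) // 11.361471197
}
```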