Mar 17 17:49:45.126008 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025
Mar 17 17:49:45.126059 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:49:45.126079 kernel: BIOS-provided physical RAM map:
Mar 17 17:49:45.126094 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Mar 17 17:49:45.126108 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Mar 17 17:49:45.126122 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Mar 17 17:49:45.126140 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Mar 17 17:49:45.126155 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Mar 17 17:49:45.126174 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd326fff] usable
Mar 17 17:49:45.126188 kernel: BIOS-e820: [mem 0x00000000bd327000-0x00000000bd32efff] ACPI data
Mar 17 17:49:45.126203 kernel: BIOS-e820: [mem 0x00000000bd32f000-0x00000000bf8ecfff] usable
Mar 17 17:49:45.126219 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Mar 17 17:49:45.126234 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Mar 17 17:49:45.126249 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Mar 17 17:49:45.126274 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Mar 17 17:49:45.126291 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Mar 17 17:49:45.126307 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Mar 17 17:49:45.126324 kernel: NX (Execute Disable) protection: active
Mar 17 17:49:45.126340 kernel: APIC: Static calls initialized
Mar 17 17:49:45.126357 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:49:45.126375 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd327018
Mar 17 17:49:45.126391 kernel: random: crng init done
Mar 17 17:49:45.126408 kernel: secureboot: Secure boot disabled
Mar 17 17:49:45.126424 kernel: SMBIOS 2.4 present.
Mar 17 17:49:45.126452 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Mar 17 17:49:45.126469 kernel: Hypervisor detected: KVM
Mar 17 17:49:45.126485 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:49:45.126502 kernel: kvm-clock: using sched offset of 13036427458 cycles
Mar 17 17:49:45.126520 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:49:45.126538 kernel: tsc: Detected 2299.998 MHz processor
Mar 17 17:49:45.126555 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:49:45.126573 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:49:45.126590 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Mar 17 17:49:45.126607 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Mar 17 17:49:45.126628 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:49:45.126645 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Mar 17 17:49:45.126662 kernel: Using GB pages for direct mapping
Mar 17 17:49:45.126679 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:49:45.126697 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Mar 17 17:49:45.126714 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Mar 17 17:49:45.126739 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Mar 17 17:49:45.126761 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Mar 17 17:49:45.126779 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Mar 17 17:49:45.126797 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Mar 17 17:49:45.126816 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Mar 17 17:49:45.126834 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Mar 17 17:49:45.126852 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Mar 17 17:49:45.126870 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Mar 17 17:49:45.129397 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Mar 17 17:49:45.129422 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Mar 17 17:49:45.129447 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Mar 17 17:49:45.129465 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Mar 17 17:49:45.129483 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Mar 17 17:49:45.129500 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Mar 17 17:49:45.129518 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Mar 17 17:49:45.129536 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Mar 17 17:49:45.129560 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Mar 17 17:49:45.129578 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Mar 17 17:49:45.129596 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 17:49:45.129614 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 17:49:45.129632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 17 17:49:45.129649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Mar 17 17:49:45.129667 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Mar 17 17:49:45.129684 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Mar 17 17:49:45.129703 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Mar 17 17:49:45.129725 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Mar 17 17:49:45.129743 kernel: Zone ranges:
Mar 17 17:49:45.129761 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:49:45.129779 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Mar 17 17:49:45.129797 kernel:   Normal   [mem 0x0000000100000000-0x000000021fffffff]
Mar 17 17:49:45.129815 kernel: Movable zone start for each node
Mar 17 17:49:45.129832 kernel: Early memory node ranges
Mar 17 17:49:45.129849 kernel:   node   0: [mem 0x0000000000001000-0x0000000000054fff]
Mar 17 17:49:45.129867 kernel:   node   0: [mem 0x0000000000060000-0x0000000000097fff]
Mar 17 17:49:45.129944 kernel:   node   0: [mem 0x0000000000100000-0x00000000bd326fff]
Mar 17 17:49:45.129968 kernel:   node   0: [mem 0x00000000bd32f000-0x00000000bf8ecfff]
Mar 17 17:49:45.129986 kernel:   node   0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Mar 17 17:49:45.130005 kernel:   node   0: [mem 0x0000000100000000-0x000000021fffffff]
Mar 17 17:49:45.130023 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Mar 17 17:49:45.130042 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:49:45.130060 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Mar 17 17:49:45.130078 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Mar 17 17:49:45.130097 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Mar 17 17:49:45.130115 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 17 17:49:45.130138 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Mar 17 17:49:45.130155 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 17 17:49:45.130173 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:49:45.130191 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:49:45.130209 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:49:45.130227 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:49:45.130245 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:49:45.130264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:49:45.130282 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:49:45.130304 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 17:49:45.130321 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 17 17:49:45.130339 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:49:45.130358 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:49:45.130376 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 17 17:49:45.130395 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 17 17:49:45.130413 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 17 17:49:45.130431 kernel: pcpu-alloc: [0] 0 1
Mar 17 17:49:45.130455 kernel: kvm-guest: PV spinlocks enabled
Mar 17 17:49:45.130477 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 17:49:45.130498 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:49:45.130517 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:49:45.130535 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 17 17:49:45.130553 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:49:45.130571 kernel: Fallback order for Node 0: 0
Mar 17 17:49:45.130589 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1932272
Mar 17 17:49:45.130607 kernel: Policy zone: Normal
Mar 17 17:49:45.130629 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:49:45.130647 kernel: software IO TLB: area num 2.
Mar 17 17:49:45.130666 kernel: Memory: 7511312K/7860552K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 348984K reserved, 0K cma-reserved)
Mar 17 17:49:45.130684 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:49:45.130702 kernel: Kernel/User page tables isolation: enabled
Mar 17 17:49:45.130720 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 17 17:49:45.130738 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:49:45.130756 kernel: Dynamic Preempt: voluntary
Mar 17 17:49:45.130792 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:49:45.130812 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:49:45.130832 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:49:45.130851 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:49:45.130874 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:49:45.130940 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:49:45.130959 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:49:45.130978 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:49:45.130998 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 17:49:45.131022 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:49:45.131041 kernel: Console: colour dummy device 80x25
Mar 17 17:49:45.131061 kernel: printk: console [ttyS0] enabled
Mar 17 17:49:45.131080 kernel: ACPI: Core revision 20230628
Mar 17 17:49:45.131099 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:49:45.131118 kernel: x2apic enabled
Mar 17 17:49:45.131138 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:49:45.131157 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Mar 17 17:49:45.131177 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 17 17:49:45.131200 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Mar 17 17:49:45.131219 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Mar 17 17:49:45.131238 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Mar 17 17:49:45.131257 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:49:45.131277 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Mar 17 17:49:45.131294 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Mar 17 17:49:45.131312 kernel: Spectre V2 : Mitigation: IBRS
Mar 17 17:49:45.131330 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:49:45.131349 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:49:45.131374 kernel: RETBleed: Mitigation: IBRS
Mar 17 17:49:45.131392 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:49:45.131411 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Mar 17 17:49:45.131431 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:49:45.131456 kernel: MDS: Mitigation: Clear CPU buffers
Mar 17 17:49:45.131475 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 17:49:45.131494 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:49:45.131514 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:49:45.131538 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:49:45.131557 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Mar 17 17:49:45.131577 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 17:49:45.131597 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:49:45.131616 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:49:45.131635 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:49:45.131654 kernel: landlock: Up and running.
Mar 17 17:49:45.131673 kernel: SELinux:  Initializing.
Mar 17 17:49:45.131692 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 17:49:45.131715 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 17:49:45.131734 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Mar 17 17:49:45.131754 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:49:45.131775 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:49:45.131795 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:49:45.131815 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Mar 17 17:49:45.131835 kernel: signal: max sigframe size: 1776
Mar 17 17:49:45.131856 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:49:45.131888 kernel: rcu: 	Max phase no-delay instances is 400.
Mar 17 17:49:45.131911 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 17:49:45.131930 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:49:45.131949 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:49:45.131967 kernel: .... node  #0, CPUs:      #1
Mar 17 17:49:45.131987 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 17 17:49:45.132007 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 17 17:49:45.132025 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:49:45.132044 kernel: smpboot: Max logical packages: 1
Mar 17 17:49:45.132062 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Mar 17 17:49:45.132085 kernel: devtmpfs: initialized
Mar 17 17:49:45.132104 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:49:45.132123 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Mar 17 17:49:45.132142 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:49:45.132162 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:49:45.132181 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:49:45.132199 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:49:45.132217 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:49:45.132236 kernel: audit: type=2000 audit(1742233783.288:1): state=initialized audit_enabled=0 res=1
Mar 17 17:49:45.132259 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:49:45.132278 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:49:45.132297 kernel: cpuidle: using governor menu
Mar 17 17:49:45.132316 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:49:45.132335 kernel: dca service started, version 1.12.1
Mar 17 17:49:45.132354 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:49:45.132373 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:49:45.132392 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:49:45.132414 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:49:45.132438 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:49:45.132457 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:49:45.132476 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:49:45.132495 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:49:45.132514 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:49:45.132533 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:49:45.132552 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 17 17:49:45.132571 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:49:45.132590 kernel: ACPI: Interpreter enabled
Mar 17 17:49:45.132612 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 17:49:45.132630 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:49:45.132649 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:49:45.132668 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 17 17:49:45.132687 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Mar 17 17:49:45.132706 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:49:45.134805 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:49:45.135067 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 17 17:49:45.135286 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 17 17:49:45.135312 kernel: PCI host bridge to bus 0000:00
Mar 17 17:49:45.135519 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Mar 17 17:49:45.135703 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Mar 17 17:49:45.137729 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:49:45.137981 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Mar 17 17:49:45.138164 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:49:45.138371 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 17:49:45.138585 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Mar 17 17:49:45.138785 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 17:49:45.139620 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 17 17:49:45.140251 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Mar 17 17:49:45.140528 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc040-0xc07f]
Mar 17 17:49:45.140722 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Mar 17 17:49:45.140949 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:49:45.141144 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc03f]
Mar 17 17:49:45.141332 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Mar 17 17:49:45.141537 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 17:49:45.141728 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Mar 17 17:49:45.144118 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Mar 17 17:49:45.144151 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:49:45.144172 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:49:45.144192 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:49:45.144211 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:49:45.144230 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 17:49:45.144250 kernel: iommu: Default domain type: Translated
Mar 17 17:49:45.144270 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:49:45.144289 kernel: efivars: Registered efivars operations
Mar 17 17:49:45.144314 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:49:45.144333 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:49:45.144352 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Mar 17 17:49:45.144371 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Mar 17 17:49:45.144390 kernel: e820: reserve RAM buffer [mem 0xbd327000-0xbfffffff]
Mar 17 17:49:45.144409 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Mar 17 17:49:45.144428 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Mar 17 17:49:45.144454 kernel: vgaarb: loaded
Mar 17 17:49:45.144474 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:49:45.144497 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:49:45.144517 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:49:45.144535 kernel: pnp: PnP ACPI init
Mar 17 17:49:45.144555 kernel: pnp: PnP ACPI: found 7 devices
Mar 17 17:49:45.144575 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:49:45.144594 kernel: NET: Registered PF_INET protocol family
Mar 17 17:49:45.144614 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 17:49:45.144634 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 17 17:49:45.144653 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:49:45.144676 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:49:45.144696 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 17 17:49:45.144715 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 17 17:49:45.144734 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 17 17:49:45.144754 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 17 17:49:45.144789 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:49:45.144809 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:49:45.145020 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Mar 17 17:49:45.145198 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Mar 17 17:49:45.145373 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:49:45.145546 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Mar 17 17:49:45.145737 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 17:49:45.145763 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:49:45.145783 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 17 17:49:45.145803 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Mar 17 17:49:45.145822 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 17:49:45.145847 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Mar 17 17:49:45.145867 kernel: clocksource: Switched to clocksource tsc
Mar 17 17:49:45.146244 kernel: Initialise system trusted keyrings
Mar 17 17:49:45.146400 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 17 17:49:45.146420 kernel: Key type asymmetric registered
Mar 17 17:49:45.146445 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:49:45.146464 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:49:45.146610 kernel: io scheduler mq-deadline registered
Mar 17 17:49:45.146629 kernel: io scheduler kyber registered
Mar 17 17:49:45.146655 kernel: io scheduler bfq registered
Mar 17 17:49:45.146674 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:49:45.146824 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 17:49:45.147309 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Mar 17 17:49:45.147425 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Mar 17 17:49:45.147685 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Mar 17 17:49:45.147712 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 17:49:45.147933 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Mar 17 17:49:45.147964 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:49:45.147984 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:49:45.148003 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 17 17:49:45.148021 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Mar 17 17:49:45.148040 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Mar 17 17:49:45.148228 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Mar 17 17:49:45.148253 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 17:49:45.148272 kernel: i8042: Warning: Keylock active
Mar 17 17:49:45.148295 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 17:49:45.148314 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 17:49:45.148503 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 17 17:49:45.148675 kernel: rtc_cmos 00:00: registered as rtc0
Mar 17 17:49:45.148849 kernel: rtc_cmos 00:00: setting system clock to 2025-03-17T17:49:44 UTC (1742233784)
Mar 17 17:49:45.149034 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 17 17:49:45.149057 kernel: intel_pstate: CPU model not supported
Mar 17 17:49:45.149076 kernel: pstore: Using crash dump compression: deflate
Mar 17 17:49:45.149099 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 17 17:49:45.149117 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:49:45.149136 kernel: Segment Routing with IPv6
Mar 17 17:49:45.149154 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:49:45.149172 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:49:45.149191 kernel: Key type dns_resolver registered
Mar 17 17:49:45.149209 kernel: IPI shorthand broadcast: enabled
Mar 17 17:49:45.149228 kernel: sched_clock: Marking stable (839004302, 155766957)->(1029062250, -34290991)
Mar 17 17:49:45.149246 kernel: registered taskstats version 1
Mar 17 17:49:45.149267 kernel: Loading compiled-in X.509 certificates
Mar 17 17:49:45.149286 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd'
Mar 17 17:49:45.149304 kernel: Key type .fscrypt registered
Mar 17 17:49:45.149323 kernel: Key type fscrypt-provisioning registered
Mar 17 17:49:45.149341 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:49:45.149360 kernel: ima: No architecture policies found
Mar 17 17:49:45.149378 kernel: clk: Disabling unused clocks
Mar 17 17:49:45.149396 kernel: Freeing unused kernel image (initmem) memory: 43476K
Mar 17 17:49:45.149415 kernel: Write protecting the kernel read-only data: 38912k
Mar 17 17:49:45.149444 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K
Mar 17 17:49:45.149462 kernel: Run /init as init process
Mar 17 17:49:45.149481 kernel:   with arguments:
Mar 17 17:49:45.149498 kernel:     /init
Mar 17 17:49:45.149516 kernel:   with environment:
Mar 17 17:49:45.149534 kernel:     HOME=/
Mar 17 17:49:45.149552 kernel:     TERM=linux
Mar 17 17:49:45.149570 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:49:45.149588 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 17:49:45.149612 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:49:45.149636 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:49:45.149657 systemd[1]: Detected virtualization google.
Mar 17 17:49:45.149676 systemd[1]: Detected architecture x86-64.
Mar 17 17:49:45.149694 systemd[1]: Running in initrd.
Mar 17 17:49:45.149713 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:49:45.149733 systemd[1]: Hostname set to .
Mar 17 17:49:45.149756 systemd[1]: Initializing machine ID from random generator.
Mar 17 17:49:45.149775 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:49:45.149794 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:49:45.149814 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:49:45.149834 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:49:45.149854 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:49:45.149874 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:49:45.150557 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:49:45.150594 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:49:45.150622 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:49:45.150643 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:49:45.150663 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:49:45.150683 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:49:45.150707 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:49:45.150727 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:49:45.150747 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:49:45.150767 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:49:45.150787 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:49:45.150808 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:49:45.150828 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:49:45.150848 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:49:45.150871 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:49:45.150907 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:49:45.150927 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:49:45.150947 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:49:45.150967 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:49:45.150987 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:49:45.151007 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:49:45.151027 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:49:45.151047 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:49:45.151072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:49:45.151092 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:49:45.151113 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:49:45.151137 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:49:45.151205 systemd-journald[184]: Collecting audit messages is disabled. Mar 17 17:49:45.151250 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:49:45.151270 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:49:45.151291 systemd-journald[184]: Journal started Mar 17 17:49:45.151335 systemd-journald[184]: Runtime Journal (/run/log/journal/4b3c9a0a7c9447b79b2035a750cec9ed) is 8M, max 148.6M, 140.6M free. Mar 17 17:49:45.119897 systemd-modules-load[185]: Inserted module 'overlay' Mar 17 17:49:45.162934 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:49:45.167905 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:49:45.169086 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:49:45.175021 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:49:45.176755 systemd-modules-load[185]: Inserted module 'br_netfilter' Mar 17 17:49:45.191057 kernel: Bridge firewalling registered Mar 17 17:49:45.181336 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:49:45.193132 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:49:45.199996 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:49:45.218059 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Mar 17 17:49:45.223269 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:49:45.231681 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:49:45.236839 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:49:45.243298 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:49:45.256073 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:49:45.269462 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:49:45.277057 dracut-cmdline[217]: dracut-dracut-053 Mar 17 17:49:45.280798 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c Mar 17 17:49:45.330994 systemd-resolved[223]: Positive Trust Anchors: Mar 17 17:49:45.331015 systemd-resolved[223]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:49:45.331090 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:49:45.335819 systemd-resolved[223]: Defaulting to hostname 'linux'. Mar 17 17:49:45.340525 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:49:45.355114 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:49:45.389919 kernel: SCSI subsystem initialized Mar 17 17:49:45.400924 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:49:45.412914 kernel: iscsi: registered transport (tcp) Mar 17 17:49:45.436434 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:49:45.436513 kernel: QLogic iSCSI HBA Driver Mar 17 17:49:45.487589 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:49:45.492128 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:49:45.533398 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 17 17:49:45.533483 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:49:45.533512 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:49:45.579928 kernel: raid6: avx2x4 gen() 18301 MB/s
Mar 17 17:49:45.596913 kernel: raid6: avx2x2 gen() 18338 MB/s
Mar 17 17:49:45.614475 kernel: raid6: avx2x1 gen() 14371 MB/s
Mar 17 17:49:45.614527 kernel: raid6: using algorithm avx2x2 gen() 18338 MB/s
Mar 17 17:49:45.632439 kernel: raid6: .... xor() 18626 MB/s, rmw enabled
Mar 17 17:49:45.632497 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 17:49:45.655921 kernel: xor: automatically using best checksumming function avx
Mar 17 17:49:45.819919 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:49:45.833628 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:49:45.847082 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:49:45.865015 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Mar 17 17:49:45.872927 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:49:45.884268 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:49:45.911817 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Mar 17 17:49:45.949254 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:49:45.964128 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:49:46.047034 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:49:46.067411 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:49:46.119013 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:49:46.140210 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:49:46.207171 kernel: scsi host0: Virtio SCSI HBA
Mar 17 17:49:46.208716 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Mar 17 17:49:46.211279 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 17:49:46.173143 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:49:46.250387 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 17:49:46.250429 kernel: AES CTR mode by8 optimization enabled
Mar 17 17:49:46.184005 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:49:46.201352 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:49:46.289324 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:49:46.289509 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:49:46.330647 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Mar 17 17:49:46.399065 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Mar 17 17:49:46.399342 kernel: sd 0:0:1:0: [sda] Write Protect is off
Mar 17 17:49:46.399577 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Mar 17 17:49:46.399797 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 17 17:49:46.400048 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:49:46.400074 kernel: GPT:17805311 != 25165823
Mar 17 17:49:46.400114 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:49:46.400137 kernel: GPT:17805311 != 25165823
Mar 17 17:49:46.400160 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:49:46.400182 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:49:46.400947 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Mar 17 17:49:46.353233 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:49:46.398400 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:49:46.398685 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:49:46.462939 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (467)
Mar 17 17:49:46.410769 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:49:46.430738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:49:46.462753 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:49:46.489907 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/sda3 scanned by (udev-worker) (455)
Mar 17 17:49:46.522732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:49:46.547366 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Mar 17 17:49:46.561137 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Mar 17 17:49:46.589896 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Mar 17 17:49:46.600529 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Mar 17 17:49:46.609178 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Mar 17 17:49:46.638089 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:49:46.671975 disk-uuid[543]: Primary Header is updated.
Mar 17 17:49:46.671975 disk-uuid[543]: Secondary Entries is updated.
Mar 17 17:49:46.671975 disk-uuid[543]: Secondary Header is updated.
Mar 17 17:49:46.711013 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:49:46.681077 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:49:46.749913 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:49:47.720578 disk-uuid[544]: The operation has completed successfully.
Mar 17 17:49:47.730069 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:49:47.805897 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:49:47.806057 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:49:47.862127 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:49:47.890101 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 17 17:49:47.890391 sh[567]: Success
Mar 17 17:49:47.981227 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:49:47.987834 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:49:48.015450 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:49:48.059220 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc
Mar 17 17:49:48.059312 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:49:48.059338 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:49:48.068660 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:49:48.081297 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:49:48.107958 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 17 17:49:48.113312 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:49:48.114263 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:49:48.120073 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:49:48.185044 kernel: BTRFS info (device sda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:49:48.185087 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:49:48.185111 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:49:48.165973 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:49:48.210170 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:49:48.210226 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:49:48.226933 kernel: BTRFS info (device sda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:49:48.243630 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:49:48.260140 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:49:48.330767 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:49:48.359110 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:49:48.460829 systemd-networkd[751]: lo: Link UP
Mar 17 17:49:48.460841 systemd-networkd[751]: lo: Gained carrier
Mar 17 17:49:48.463546 systemd-networkd[751]: Enumeration completed
Mar 17 17:49:48.468427 ignition[680]: Ignition 2.20.0
Mar 17 17:49:48.463686 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:49:48.468435 ignition[680]: Stage: fetch-offline
Mar 17 17:49:48.464347 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:49:48.468475 ignition[680]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:49:48.464358 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:49:48.468486 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 17:49:48.466800 systemd-networkd[751]: eth0: Link UP
Mar 17 17:49:48.468605 ignition[680]: parsed url from cmdline: ""
Mar 17 17:49:48.466807 systemd-networkd[751]: eth0: Gained carrier
Mar 17 17:49:48.468610 ignition[680]: no config URL provided
Mar 17 17:49:48.466821 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:49:48.468616 ignition[680]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:49:48.468329 systemd[1]: Reached target network.target - Network.
Mar 17 17:49:48.468626 ignition[680]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:49:48.475954 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.84/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 17 17:49:48.468633 ignition[680]: failed to fetch config: resource requires networking
Mar 17 17:49:48.491324 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:49:48.468829 ignition[680]: Ignition finished successfully
Mar 17 17:49:48.516116 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:49:48.547376 ignition[762]: Ignition 2.20.0
Mar 17 17:49:48.560044 unknown[762]: fetched base config from "system"
Mar 17 17:49:48.547388 ignition[762]: Stage: fetch
Mar 17 17:49:48.560055 unknown[762]: fetched base config from "system"
Mar 17 17:49:48.547654 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:49:48.560064 unknown[762]: fetched user config from "gcp"
Mar 17 17:49:48.547674 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 17:49:48.579679 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:49:48.547826 ignition[762]: parsed url from cmdline: ""
Mar 17 17:49:48.596198 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:49:48.547833 ignition[762]: no config URL provided
Mar 17 17:49:48.633568 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:49:48.547845 ignition[762]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:49:48.659101 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:49:48.547861 ignition[762]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:49:48.682606 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:49:48.547941 ignition[762]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Mar 17 17:49:48.697334 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:49:48.553181 ignition[762]: GET result: OK
Mar 17 17:49:48.721191 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:49:48.553293 ignition[762]: parsing config with SHA512: f2604b6b128c9e958d21850a3d1220dc28e6fd371b4b473aea013b18227d80b9aba4d86a13d705888a8cec2a4e3adfb358b7c306c09360772d05ef11ba9efde0
Mar 17 17:49:48.743220 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:49:48.560731 ignition[762]: fetch: fetch complete
Mar 17 17:49:48.753253 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:49:48.560737 ignition[762]: fetch: fetch passed
Mar 17 17:49:48.778053 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:49:48.560791 ignition[762]: Ignition finished successfully
Mar 17 17:49:48.802113 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:49:48.631088 ignition[769]: Ignition 2.20.0
Mar 17 17:49:48.631098 ignition[769]: Stage: kargs
Mar 17 17:49:48.631307 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:49:48.631324 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 17:49:48.632365 ignition[769]: kargs: kargs passed
Mar 17 17:49:48.632418 ignition[769]: Ignition finished successfully
Mar 17 17:49:48.680186 ignition[774]: Ignition 2.20.0
Mar 17 17:49:48.680196 ignition[774]: Stage: disks
Mar 17 17:49:48.680453 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:49:48.680467 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 17:49:48.681409 ignition[774]: disks: disks passed
Mar 17 17:49:48.681461 ignition[774]: Ignition finished successfully
Mar 17 17:49:48.847376 systemd-fsck[783]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 17 17:49:49.054555 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:49:49.072009 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:49:49.201927 kernel: EXT4-fs (sda9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none.
Mar 17 17:49:49.202505 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:49:49.203325 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:49:49.224147 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:49:49.241005 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:49:49.267600 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:49:49.323207 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (791)
Mar 17 17:49:49.323249 kernel: BTRFS info (device sda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:49:49.323266 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:49:49.323280 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:49:49.267689 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:49:49.363072 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:49:49.363113 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:49:49.267736 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:49:49.278214 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:49:49.345712 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:49:49.376119 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:49:49.506838 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:49:49.517523 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:49:49.525973 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:49:49.538060 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:49:49.677839 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:49:49.683031 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:49:49.711230 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:49:49.735042 kernel: BTRFS info (device sda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:49:49.745165 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:49:49.776841 ignition[903]: INFO : Ignition 2.20.0
Mar 17 17:49:49.784062 ignition[903]: INFO : Stage: mount
Mar 17 17:49:49.784062 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:49:49.784062 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 17:49:49.784062 ignition[903]: INFO : mount: mount passed
Mar 17 17:49:49.784062 ignition[903]: INFO : Ignition finished successfully
Mar 17 17:49:49.777563 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:49:49.784524 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:49:49.813026 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:49:49.835115 systemd-networkd[751]: eth0: Gained IPv6LL
Mar 17 17:49:50.208272 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:49:50.245932 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (915)
Mar 17 17:49:50.263920 kernel: BTRFS info (device sda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:49:50.264013 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:49:50.264039 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:49:50.286403 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:49:50.286486 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:49:50.289747 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:49:50.327463 ignition[932]: INFO : Ignition 2.20.0
Mar 17 17:49:50.327463 ignition[932]: INFO : Stage: files
Mar 17 17:49:50.342032 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:49:50.342032 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 17:49:50.342032 ignition[932]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:49:50.342032 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:49:50.342032 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:49:50.342032 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:49:50.342032 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:49:50.342032 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:49:50.337806 unknown[932]: wrote ssh authorized keys file for user: core
Mar 17 17:49:50.441022 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:49:50.441022 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 17:49:51.619459 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:49:52.322486 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:49:52.340131 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:49:52.340131 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 17:49:52.651753 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:49:52.780311 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:49:52.796144 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 17:49:53.035117 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:49:53.293747 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:49:53.293747 ignition[932]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:49:53.333056 ignition[932]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:49:53.333056 ignition[932]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:49:53.333056 ignition[932]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:49:53.333056 ignition[932]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:49:53.333056 ignition[932]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:49:53.333056 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:49:53.333056 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:49:53.333056 ignition[932]: INFO : files: files passed
Mar 17 17:49:53.333056 ignition[932]: INFO : Ignition finished successfully
Mar 17 17:49:53.298926 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:49:53.329299 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:49:53.334199 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:49:53.385585 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:49:53.544122 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:49:53.544122 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:49:53.385748 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:49:53.612093 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:49:53.405195 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:49:53.417197 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:49:53.445101 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:49:53.538539 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:49:53.538675 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:49:53.555390 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:49:53.569286 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:49:53.602250 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:49:53.609154 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:49:53.660841 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:49:53.687117 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:49:53.722983 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:49:53.743239 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:49:53.768363 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:49:53.787253 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:49:53.787460 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:49:53.820317 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:49:53.840242 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:49:53.858312 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:49:53.876242 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:49:53.895276 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:49:53.917251 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:49:53.937347 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:49:53.956283 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:49:53.976253 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:49:53.996275 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:49:54.014158 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Mar 17 17:49:54.014397 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:49:54.045345 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:49:54.065285 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:49:54.086174 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:49:54.086370 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:49:54.105230 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:49:54.105433 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:49:54.224072 ignition[985]: INFO : Ignition 2.20.0
Mar 17 17:49:54.224072 ignition[985]: INFO : Stage: umount
Mar 17 17:49:54.224072 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:49:54.224072 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 17 17:49:54.224072 ignition[985]: INFO : umount: umount passed
Mar 17 17:49:54.224072 ignition[985]: INFO : Ignition finished successfully
Mar 17 17:49:54.133299 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:49:54.133543 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:49:54.154301 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:49:54.154499 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:49:54.182152 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:49:54.220150 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:49:54.255056 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:49:54.255338 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:49:54.277302 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:49:54.277496 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:49:54.306854 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:49:54.308282 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:49:54.308402 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:49:54.324760 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:49:54.324902 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:49:54.349560 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:49:54.349693 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:49:54.367547 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:49:54.367615 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:49:54.373290 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:49:54.373358 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:49:54.390333 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:49:54.390405 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:49:54.407318 systemd[1]: Stopped target network.target - Network.
Mar 17 17:49:54.424268 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:49:54.424363 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:49:54.452167 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:49:54.470080 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:49:54.474015 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:49:54.491090 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:49:54.506053 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:49:54.521130 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:49:54.521215 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:49:54.539146 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:49:54.539226 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:49:54.557151 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:49:54.557277 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:49:54.575150 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:49:54.575244 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:49:54.593137 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:49:54.593232 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:49:54.614347 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:49:54.623342 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:49:54.650623 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:49:54.650753 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:49:54.662423 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 17:49:54.662685 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:49:54.662821 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:49:54.677831 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 17:49:54.679388 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:49:54.679479 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:49:54.712028 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:49:54.721212 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:49:54.721299 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:49:54.738354 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:49:54.738430 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:49:54.768334 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:49:54.768404 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:49:54.786224 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:49:55.265048 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:49:54.786314 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:49:54.796424 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:49:54.831443 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:49:54.831555 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:49:54.832140 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:49:54.832303 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:49:54.856158 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:49:54.856230 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:49:54.865343 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:49:54.865396 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:49:54.892247 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:49:54.892348 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:49:54.920348 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:49:54.920439 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:49:54.963058 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:49:54.963286 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:49:55.018084 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:49:55.030210 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:49:55.030296 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:49:55.061317 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 17 17:49:55.061390 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:49:55.080213 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:49:55.080294 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:49:55.101245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:49:55.101337 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:49:55.113615 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 17:49:55.113715 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:49:55.114253 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:49:55.114368 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:49:55.130585 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:49:55.130726 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:49:55.160360 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:49:55.187096 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:49:55.215408 systemd[1]: Switching root.
Mar 17 17:49:55.615050 systemd-journald[184]: Journal stopped
Mar 17 17:49:58.154634 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:49:58.154700 kernel: SELinux: policy capability open_perms=1
Mar 17 17:49:58.154724 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:49:58.154743 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:49:58.154761 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:49:58.154779 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:49:58.154801 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:49:58.154822 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:49:58.154845 kernel: audit: type=1403 audit(1742233795.924:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:49:58.154867 systemd[1]: Successfully loaded SELinux policy in 89.992ms.
Mar 17 17:49:58.154907 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.150ms.
Mar 17 17:49:58.154931 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:49:58.154952 systemd[1]: Detected virtualization google.
Mar 17 17:49:58.154973 systemd[1]: Detected architecture x86-64.
Mar 17 17:49:58.155001 systemd[1]: Detected first boot.
Mar 17 17:49:58.155024 systemd[1]: Initializing machine ID from random generator.
Mar 17 17:49:58.155046 zram_generator::config[1028]: No configuration found.
Mar 17 17:49:58.155067 kernel: Guest personality initialized and is inactive
Mar 17 17:49:58.155086 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 17 17:49:58.155112 kernel: Initialized host personality
Mar 17 17:49:58.155132 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 17:49:58.155152 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:49:58.155175 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 17:49:58.155197 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:49:58.155218 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:49:58.155239 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:49:58.155261 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:49:58.155283 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:49:58.155320 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:49:58.155342 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:49:58.155366 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:49:58.155387 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:49:58.155410 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:49:58.155432 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:49:58.155454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:49:58.155481 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:49:58.155512 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:49:58.155534 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:49:58.155557 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:49:58.155580 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:49:58.155608 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:49:58.155631 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:49:58.155655 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:49:58.155682 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:49:58.155706 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:49:58.155728 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:49:58.155752 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:49:58.155775 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:49:58.155797 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:49:58.155821 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:49:58.155846 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:49:58.155873 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:49:58.155908 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 17:49:58.155931 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:49:58.155954 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:49:58.155983 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:49:58.156006 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:49:58.156031 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:49:58.156054 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:49:58.156076 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:49:58.156100 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:58.156123 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:49:58.156145 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:49:58.156173 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:49:58.156197 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:49:58.156220 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:49:58.156244 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:49:58.156267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:49:58.156289 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:49:58.156313 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:49:58.156337 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:49:58.156361 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:49:58.156389 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:49:58.156412 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:49:58.156434 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:49:58.156457 kernel: ACPI: bus type drm_connector registered
Mar 17 17:49:58.156480 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:49:58.156512 kernel: fuse: init (API version 7.39)
Mar 17 17:49:58.156532 kernel: loop: module loaded
Mar 17 17:49:58.156556 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:49:58.156578 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:49:58.156599 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:49:58.156620 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:49:58.156644 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:49:58.156666 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:49:58.156687 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:49:58.156746 systemd-journald[1116]: Collecting audit messages is disabled.
Mar 17 17:49:58.156795 systemd-journald[1116]: Journal started
Mar 17 17:49:58.156838 systemd-journald[1116]: Runtime Journal (/run/log/journal/af7a7c693abd41c4a9ad732dfca0f52a) is 8M, max 148.6M, 140.6M free.
Mar 17 17:49:56.904478 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:49:56.916614 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 17 17:49:56.917247 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:49:58.171929 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:49:58.196915 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:49:58.209933 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 17:49:58.245905 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:49:58.262927 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:49:58.270915 systemd[1]: Stopped verity-setup.service.
Mar 17 17:49:58.298914 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:58.309921 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:49:58.321544 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:49:58.333343 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:49:58.343377 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:49:58.353349 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:49:58.364213 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:49:58.374232 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:49:58.384450 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:49:58.396438 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:49:58.408367 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:49:58.408670 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:49:58.420452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:49:58.420752 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:49:58.432406 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:49:58.432698 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:49:58.443431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:49:58.443706 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:49:58.455456 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:49:58.455767 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:49:58.466436 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:49:58.466724 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:49:58.477536 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:49:58.487536 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:49:58.499550 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:49:58.511441 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 17:49:58.523438 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:49:58.547409 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:49:58.563019 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:49:58.587073 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:49:58.597056 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:49:58.597122 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:49:58.608345 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 17:49:58.632126 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:49:58.654213 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:49:58.665240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:49:58.671132 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:49:58.693239 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:49:58.702476 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:49:58.707704 systemd-journald[1116]: Time spent on flushing to /var/log/journal/af7a7c693abd41c4a9ad732dfca0f52a is 122.829ms for 946 entries.
Mar 17 17:49:58.707704 systemd-journald[1116]: System Journal (/var/log/journal/af7a7c693abd41c4a9ad732dfca0f52a) is 8M, max 584.8M, 576.8M free.
Mar 17 17:49:58.870100 systemd-journald[1116]: Received client request to flush runtime journal.
Mar 17 17:49:58.870172 kernel: loop0: detected capacity change from 0 to 52152
Mar 17 17:49:58.716169 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:49:58.724586 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:49:58.735172 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:49:58.753126 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:49:58.775120 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:49:58.796678 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:49:58.814781 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:49:58.832294 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:49:58.843465 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:49:58.858181 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:49:58.869601 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:49:58.880865 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:49:58.887285 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Mar 17 17:49:58.887320 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Mar 17 17:49:58.905716 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:49:58.917972 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:49:58.932780 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:49:58.959538 kernel: loop1: detected capacity change from 0 to 147912
Mar 17 17:49:58.961269 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 17:49:58.983849 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:49:58.996019 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:49:59.000764 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 17:49:59.015716 udevadm[1155]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:49:59.063959 kernel: loop2: detected capacity change from 0 to 210664
Mar 17 17:49:59.099826 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:49:59.116359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:49:59.173164 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Mar 17 17:49:59.174333 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Mar 17 17:49:59.187941 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:49:59.204008 kernel: loop3: detected capacity change from 0 to 138176
Mar 17 17:49:59.303034 kernel: loop4: detected capacity change from 0 to 52152
Mar 17 17:49:59.346218 kernel: loop5: detected capacity change from 0 to 147912
Mar 17 17:49:59.407424 kernel: loop6: detected capacity change from 0 to 210664
Mar 17 17:49:59.452929 kernel: loop7: detected capacity change from 0 to 138176
Mar 17 17:49:59.504720 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Mar 17 17:49:59.505739 (sd-merge)[1180]: Merged extensions into '/usr'.
Mar 17 17:49:59.519823 systemd[1]: Reload requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:49:59.519843 systemd[1]: Reloading...
Mar 17 17:49:59.657724 zram_generator::config[1205]: No configuration found.
Mar 17 17:49:59.821964 ldconfig[1147]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:49:59.966918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:50:00.109960 systemd[1]: Reloading finished in 588 ms.
Mar 17 17:50:00.132152 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:50:00.142750 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:50:00.174168 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:50:00.185132 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:50:00.209205 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:50:00.225636 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:50:00.226149 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:50:00.228574 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:50:00.229283 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Mar 17 17:50:00.229556 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Mar 17 17:50:00.235022 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:50:00.238686 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:50:00.238823 systemd-tmpfiles[1249]: Skipping /boot
Mar 17 17:50:00.247546 systemd[1]: Reload requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:50:00.247574 systemd[1]: Reloading...
Mar 17 17:50:00.258158 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:50:00.258373 systemd-tmpfiles[1249]: Skipping /boot
Mar 17 17:50:00.332727 systemd-udevd[1252]: Using default interface naming scheme 'v255'.
Mar 17 17:50:00.397920 zram_generator::config[1279]: No configuration found.
Mar 17 17:50:00.625987 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1301)
Mar 17 17:50:00.775189 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:50:00.788905 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 17 17:50:00.826907 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 17 17:50:00.968117 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 17 17:50:00.968161 kernel: ACPI: button: Power Button [PWRF]
Mar 17 17:50:00.968198 kernel: EDAC MC: Ver: 3.0.0
Mar 17 17:50:00.968234 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Mar 17 17:50:00.975520 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:50:00.990452 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Mar 17 17:50:00.992925 kernel: ACPI: button: Sleep Button [SLPF]
Mar 17 17:50:01.004377 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:50:01.004784 systemd[1]: Reloading finished in 756 ms.
Mar 17 17:50:01.014470 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:50:01.047187 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:50:01.078223 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:50:01.101870 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:50:01.138713 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Mar 17 17:50:01.149110 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:50:01.161166 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:50:01.180105 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:50:01.189325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:50:01.198706 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:50:01.219138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:50:01.221034 lvm[1367]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:50:01.236147 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:50:01.253358 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:50:01.263688 augenrules[1380]: No rules Mar 17 17:50:01.270698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:50:01.287313 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 17 17:50:01.296269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:50:01.303376 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:50:01.315058 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:50:01.322127 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:50:01.341479 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:50:01.362723 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:50:01.373063 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:50:01.393199 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:50:01.416145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 17 17:50:01.426039 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:50:01.431314 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:50:01.431682 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:50:01.432463 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:50:01.432927 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:50:01.433427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:50:01.433660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:50:01.434195 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:50:01.434467 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:50:01.435064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:50:01.435320 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:50:01.435821 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:50:01.436194 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:50:01.440950 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:50:01.441829 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:50:01.452045 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 17 17:50:01.458506 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:50:01.464169 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:50:01.467255 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... 
Mar 17 17:50:01.467356 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:50:01.467459 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:50:01.471711 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:50:01.477629 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:50:01.477713 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:50:01.479103 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:50:01.485984 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:50:01.523547 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:50:01.547943 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:50:01.575265 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Mar 17 17:50:01.601454 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:50:01.611693 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:50:01.709565 systemd-networkd[1390]: lo: Link UP Mar 17 17:50:01.709578 systemd-networkd[1390]: lo: Gained carrier Mar 17 17:50:01.712489 systemd-networkd[1390]: Enumeration completed Mar 17 17:50:01.714031 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:50:01.715379 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 17 17:50:01.715393 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:50:01.716449 systemd-networkd[1390]: eth0: Link UP Mar 17 17:50:01.716568 systemd-networkd[1390]: eth0: Gained carrier Mar 17 17:50:01.716601 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:50:01.724989 systemd-resolved[1391]: Positive Trust Anchors: Mar 17 17:50:01.725009 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:50:01.725096 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:50:01.726595 systemd-networkd[1390]: eth0: DHCPv4 address 10.128.0.84/32, gateway 10.128.0.1 acquired from 169.254.169.254 Mar 17 17:50:01.732719 systemd-resolved[1391]: Defaulting to hostname 'linux'. Mar 17 17:50:01.735199 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 17 17:50:01.758243 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:50:01.770221 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:50:01.780447 systemd[1]: Reached target network.target - Network. Mar 17 17:50:01.789086 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 17 17:50:01.800049 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:50:01.810238 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:50:01.821094 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:50:01.833316 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:50:01.843256 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:50:01.854058 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:50:01.865063 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:50:01.865121 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:50:01.874053 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:50:01.883453 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:50:01.894831 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:50:01.905474 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 17:50:01.917243 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:50:01.929128 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 17:50:01.949790 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:50:01.961668 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:50:01.974346 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 17:50:01.986311 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Mar 17 17:50:01.996892 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:50:02.007034 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:50:02.015112 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:50:02.015162 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:50:02.027076 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:50:02.042136 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:50:02.063049 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:50:02.081811 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:50:02.102484 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:50:02.112033 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:50:02.121453 jq[1444]: false Mar 17 17:50:02.125155 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:50:02.145965 systemd[1]: Started ntpd.service - Network Time Service. Mar 17 17:50:02.162027 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:50:02.168038 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Mar 17 17:50:02.173514 coreos-metadata[1442]: Mar 17 17:50:02.173 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Mar 17 17:50:02.176618 extend-filesystems[1445]: Found loop4 Mar 17 17:50:02.176618 extend-filesystems[1445]: Found loop5 Mar 17 17:50:02.176618 extend-filesystems[1445]: Found loop6 Mar 17 17:50:02.176618 extend-filesystems[1445]: Found loop7 Mar 17 17:50:02.176618 extend-filesystems[1445]: Found sda Mar 17 17:50:02.176618 extend-filesystems[1445]: Found sda1 Mar 17 17:50:02.176618 extend-filesystems[1445]: Found sda2 Mar 17 17:50:02.176618 extend-filesystems[1445]: Found sda3 Mar 17 17:50:02.298181 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Mar 17 17:50:02.298240 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Mar 17 17:50:02.298281 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1293) Mar 17 17:50:02.229629 dbus-daemon[1443]: [system] SELinux support is enabled Mar 17 17:50:02.298726 extend-filesystems[1445]: Found usr Mar 17 17:50:02.298726 extend-filesystems[1445]: Found sda4 Mar 17 17:50:02.298726 extend-filesystems[1445]: Found sda6 Mar 17 17:50:02.298726 extend-filesystems[1445]: Found sda7 Mar 17 17:50:02.298726 extend-filesystems[1445]: Found sda9 Mar 17 17:50:02.298726 extend-filesystems[1445]: Checking size of /dev/sda9 Mar 17 17:50:02.298726 extend-filesystems[1445]: Resized partition /dev/sda9 Mar 17 17:50:02.185134 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Mar 17 17:50:02.410982 coreos-metadata[1442]: Mar 17 17:50:02.177 INFO Fetch successful Mar 17 17:50:02.410982 coreos-metadata[1442]: Mar 17 17:50:02.177 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Mar 17 17:50:02.410982 coreos-metadata[1442]: Mar 17 17:50:02.182 INFO Fetch successful Mar 17 17:50:02.410982 coreos-metadata[1442]: Mar 17 17:50:02.182 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Mar 17 17:50:02.410982 coreos-metadata[1442]: Mar 17 17:50:02.184 INFO Fetch successful Mar 17 17:50:02.410982 coreos-metadata[1442]: Mar 17 17:50:02.184 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Mar 17 17:50:02.410982 coreos-metadata[1442]: Mar 17 17:50:02.185 INFO Fetch successful Mar 17 17:50:02.232849 dbus-daemon[1443]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1390 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:20 UTC 2025 (1): Starting Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: ---------------------------------------------------- Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: ntp-4 is maintained by Network Time Foundation, Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: corporation. 
Support and training for ntp-4 are Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: available at https://www.nwtime.org/support Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: ---------------------------------------------------- Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: proto: precision = 0.108 usec (-23) Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: basedate set to 2025-03-05 Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: gps base set to 2025-03-09 (week 2357) Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: Listen and drop on 0 v6wildcard [::]:123 Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: Listen normally on 2 lo 127.0.0.1:123 Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: Listen normally on 3 eth0 10.128.0.84:123 Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: Listen normally on 4 lo [::1]:123 Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: bind(21) AF_INET6 fe80::4001:aff:fe80:54%2#123 flags 0x11 failed: Cannot assign requested address Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:54%2#123 Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: failed to init interface for address fe80::4001:aff:fe80:54%2 Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: Listening on routing socket on fd #21 for interface updates Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:50:02.411614 ntpd[1450]: 17 Mar 17:50:02 ntpd[1450]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:50:02.414042 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:50:02.414042 extend-filesystems[1465]: Filesystem at /dev/sda9 is 
mounted on /; on-line resizing required Mar 17 17:50:02.414042 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 17 17:50:02.414042 extend-filesystems[1465]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Mar 17 17:50:02.252719 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:50:02.306746 ntpd[1450]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:20 UTC 2025 (1): Starting Mar 17 17:50:02.475032 extend-filesystems[1445]: Resized filesystem in /dev/sda9 Mar 17 17:50:02.273610 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Mar 17 17:50:02.306781 ntpd[1450]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 17 17:50:02.274550 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:50:02.483832 update_engine[1470]: I20250317 17:50:02.325209 1470 main.cc:92] Flatcar Update Engine starting Mar 17 17:50:02.483832 update_engine[1470]: I20250317 17:50:02.327353 1470 update_check_scheduler.cc:74] Next update check in 2m7s Mar 17 17:50:02.306797 ntpd[1450]: ---------------------------------------------------- Mar 17 17:50:02.286437 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:50:02.306812 ntpd[1450]: ntp-4 is maintained by Network Time Foundation, Mar 17 17:50:02.309062 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:50:02.490634 jq[1474]: true Mar 17 17:50:02.306826 ntpd[1450]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 17 17:50:02.323443 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:50:02.306839 ntpd[1450]: corporation. Support and training for ntp-4 are Mar 17 17:50:02.353674 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Mar 17 17:50:02.306855 ntpd[1450]: available at https://www.nwtime.org/support Mar 17 17:50:02.355139 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:50:02.306869 ntpd[1450]: ---------------------------------------------------- Mar 17 17:50:02.356159 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:50:02.319301 ntpd[1450]: proto: precision = 0.108 usec (-23) Mar 17 17:50:02.356970 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:50:02.325185 ntpd[1450]: basedate set to 2025-03-05 Mar 17 17:50:02.378701 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:50:02.325213 ntpd[1450]: gps base set to 2025-03-09 (week 2357) Mar 17 17:50:02.380151 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:50:02.337409 ntpd[1450]: Listen and drop on 0 v6wildcard [::]:123 Mar 17 17:50:02.420425 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:50:02.337484 ntpd[1450]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 17 17:50:02.421984 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:50:02.337741 ntpd[1450]: Listen normally on 2 lo 127.0.0.1:123 Mar 17 17:50:02.476080 systemd-logind[1468]: Watching system buttons on /dev/input/event2 (Power Button) Mar 17 17:50:02.337809 ntpd[1450]: Listen normally on 3 eth0 10.128.0.84:123 Mar 17 17:50:02.476112 systemd-logind[1468]: Watching system buttons on /dev/input/event3 (Sleep Button) Mar 17 17:50:02.337869 ntpd[1450]: Listen normally on 4 lo [::1]:123 Mar 17 17:50:02.476143 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 17:50:02.344330 ntpd[1450]: bind(21) AF_INET6 fe80::4001:aff:fe80:54%2#123 flags 0x11 failed: Cannot assign requested address Mar 17 17:50:02.477131 systemd-logind[1468]: New seat seat0. 
Mar 17 17:50:02.344371 ntpd[1450]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:54%2#123 Mar 17 17:50:02.485161 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:50:02.344392 ntpd[1450]: failed to init interface for address fe80::4001:aff:fe80:54%2 Mar 17 17:50:02.508339 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:50:02.344446 ntpd[1450]: Listening on routing socket on fd #21 for interface updates Mar 17 17:50:02.361665 ntpd[1450]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:50:02.361711 ntpd[1450]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:50:02.515142 jq[1480]: true Mar 17 17:50:02.513427 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:50:02.535567 dbus-daemon[1443]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 17:50:02.583916 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:50:02.606234 tar[1479]: linux-amd64/helm Mar 17 17:50:02.624791 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:50:02.636471 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:50:02.638351 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:50:02.639387 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:50:02.665665 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Mar 17 17:50:02.677067 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:50:02.677360 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:50:02.696278 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:50:02.701979 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:50:02.719962 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:50:02.743305 systemd[1]: Starting sshkeys.service... Mar 17 17:50:02.824695 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 17 17:50:02.845405 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 17 17:50:02.943268 coreos-metadata[1516]: Mar 17 17:50:02.943 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Mar 17 17:50:02.946791 coreos-metadata[1516]: Mar 17 17:50:02.946 INFO Fetch failed with 404: resource not found Mar 17 17:50:02.948966 coreos-metadata[1516]: Mar 17 17:50:02.946 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Mar 17 17:50:02.951669 coreos-metadata[1516]: Mar 17 17:50:02.951 INFO Fetch successful Mar 17 17:50:02.951669 coreos-metadata[1516]: Mar 17 17:50:02.951 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Mar 17 17:50:02.957297 coreos-metadata[1516]: Mar 17 17:50:02.956 INFO Fetch failed with 404: resource not found Mar 17 17:50:02.957297 coreos-metadata[1516]: Mar 17 17:50:02.956 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Mar 17 17:50:02.958574 coreos-metadata[1516]: Mar 17 17:50:02.958 INFO Fetch failed with 
404: resource not found Mar 17 17:50:02.958574 coreos-metadata[1516]: Mar 17 17:50:02.958 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Mar 17 17:50:02.959755 coreos-metadata[1516]: Mar 17 17:50:02.959 INFO Fetch successful Mar 17 17:50:02.966955 unknown[1516]: wrote ssh authorized keys file for user: core Mar 17 17:50:03.028965 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 17 17:50:03.036517 dbus-daemon[1443]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 17:50:03.038526 dbus-daemon[1443]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1511 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 17:50:03.055662 update-ssh-keys[1524]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:50:03.057354 systemd[1]: Starting polkit.service - Authorization Manager... Mar 17 17:50:03.068583 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 17 17:50:03.088368 systemd[1]: Finished sshkeys.service. Mar 17 17:50:03.198623 polkitd[1526]: Started polkitd version 121 Mar 17 17:50:03.229587 polkitd[1526]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 17:50:03.229909 polkitd[1526]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 17:50:03.232280 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:50:03.232907 polkitd[1526]: Finished loading, compiling and executing 2 rules Mar 17 17:50:03.233709 dbus-daemon[1443]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 17:50:03.233992 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 17 17:50:03.237097 polkitd[1526]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 17:50:03.278954 systemd-hostnamed[1511]: Hostname set to (transient) Mar 17 17:50:03.280486 systemd-resolved[1391]: System hostname changed to 'ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal'. Mar 17 17:50:03.307398 ntpd[1450]: bind(24) AF_INET6 fe80::4001:aff:fe80:54%2#123 flags 0x11 failed: Cannot assign requested address Mar 17 17:50:03.308066 ntpd[1450]: 17 Mar 17:50:03 ntpd[1450]: bind(24) AF_INET6 fe80::4001:aff:fe80:54%2#123 flags 0x11 failed: Cannot assign requested address Mar 17 17:50:03.308066 ntpd[1450]: 17 Mar 17:50:03 ntpd[1450]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:54%2#123 Mar 17 17:50:03.308066 ntpd[1450]: 17 Mar 17:50:03 ntpd[1450]: failed to init interface for address fe80::4001:aff:fe80:54%2 Mar 17 17:50:03.307908 ntpd[1450]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:54%2#123 Mar 17 17:50:03.307930 ntpd[1450]: failed to init interface for address fe80::4001:aff:fe80:54%2 Mar 17 17:50:03.333917 containerd[1481]: time="2025-03-17T17:50:03.332309576Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:50:03.439159 containerd[1481]: time="2025-03-17T17:50:03.439064358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:50:03.442283 containerd[1481]: time="2025-03-17T17:50:03.441865909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:50:03.442473 containerd[1481]: time="2025-03-17T17:50:03.442451067Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Mar 17 17:50:03.442588 containerd[1481]: time="2025-03-17T17:50:03.442571293Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:50:03.443499 containerd[1481]: time="2025-03-17T17:50:03.443383386Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:50:03.443499 containerd[1481]: time="2025-03-17T17:50:03.443419722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:50:03.444994 containerd[1481]: time="2025-03-17T17:50:03.444174532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:50:03.444994 containerd[1481]: time="2025-03-17T17:50:03.444299867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:50:03.445270 containerd[1481]: time="2025-03-17T17:50:03.445243249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:50:03.446127 containerd[1481]: time="2025-03-17T17:50:03.445926648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:50:03.446127 containerd[1481]: time="2025-03-17T17:50:03.445984960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:50:03.446127 containerd[1481]: time="2025-03-17T17:50:03.446004704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1
Mar 17 17:50:03.448461 containerd[1481]: time="2025-03-17T17:50:03.446961863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:50:03.448461 containerd[1481]: time="2025-03-17T17:50:03.447298110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:50:03.448461 containerd[1481]: time="2025-03-17T17:50:03.447567483Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:50:03.448461 containerd[1481]: time="2025-03-17T17:50:03.447591227Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:50:03.448461 containerd[1481]: time="2025-03-17T17:50:03.447692442Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:50:03.448461 containerd[1481]: time="2025-03-17T17:50:03.447752687Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:50:03.455099 containerd[1481]: time="2025-03-17T17:50:03.454984469Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:50:03.455278 containerd[1481]: time="2025-03-17T17:50:03.455256889Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:50:03.455946 containerd[1481]: time="2025-03-17T17:50:03.455922699Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:50:03.456065 containerd[1481]: time="2025-03-17T17:50:03.456047258Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:50:03.456909 containerd[1481]: time="2025-03-17T17:50:03.456147470Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:50:03.456909 containerd[1481]: time="2025-03-17T17:50:03.456352842Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:50:03.456909 containerd[1481]: time="2025-03-17T17:50:03.456692273Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:50:03.456909 containerd[1481]: time="2025-03-17T17:50:03.456831957Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:50:03.456909 containerd[1481]: time="2025-03-17T17:50:03.456856810Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:50:03.458299 containerd[1481]: time="2025-03-17T17:50:03.458231804Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:50:03.458392 containerd[1481]: time="2025-03-17T17:50:03.458275774Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:50:03.458494 containerd[1481]: time="2025-03-17T17:50:03.458477583Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:50:03.458598 containerd[1481]: time="2025-03-17T17:50:03.458581267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:50:03.458700 containerd[1481]: time="2025-03-17T17:50:03.458684635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:50:03.458933 containerd[1481]: time="2025-03-17T17:50:03.458909551Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:50:03.459983 containerd[1481]: time="2025-03-17T17:50:03.459767428Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:50:03.459983 containerd[1481]: time="2025-03-17T17:50:03.459797468Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:50:03.459983 containerd[1481]: time="2025-03-17T17:50:03.459844004Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:50:03.459983 containerd[1481]: time="2025-03-17T17:50:03.459917270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.459983 containerd[1481]: time="2025-03-17T17:50:03.459943096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.460684 containerd[1481]: time="2025-03-17T17:50:03.460294531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.460684 containerd[1481]: time="2025-03-17T17:50:03.460330112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.460684 containerd[1481]: time="2025-03-17T17:50:03.460468287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.460684 containerd[1481]: time="2025-03-17T17:50:03.460493357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.460684 containerd[1481]: time="2025-03-17T17:50:03.460512660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.460684 containerd[1481]: time="2025-03-17T17:50:03.460552941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.460684 containerd[1481]: time="2025-03-17T17:50:03.460575710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.460684 containerd[1481]: time="2025-03-17T17:50:03.460600724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.461569 containerd[1481]: time="2025-03-17T17:50:03.461242688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.461569 containerd[1481]: time="2025-03-17T17:50:03.461271600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.461569 containerd[1481]: time="2025-03-17T17:50:03.461311728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.461569 containerd[1481]: time="2025-03-17T17:50:03.461336528Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:50:03.462400 containerd[1481]: time="2025-03-17T17:50:03.462002833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.462400 containerd[1481]: time="2025-03-17T17:50:03.462040260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.462400 containerd[1481]: time="2025-03-17T17:50:03.462306752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:50:03.463056 containerd[1481]: time="2025-03-17T17:50:03.463002370Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:50:03.463558 containerd[1481]: time="2025-03-17T17:50:03.463302907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:50:03.463558 containerd[1481]: time="2025-03-17T17:50:03.463336258Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:50:03.463841 containerd[1481]: time="2025-03-17T17:50:03.463361568Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:50:03.463841 containerd[1481]: time="2025-03-17T17:50:03.463727551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.463841 containerd[1481]: time="2025-03-17T17:50:03.463752117Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:50:03.464409 containerd[1481]: time="2025-03-17T17:50:03.463768404Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:50:03.464409 containerd[1481]: time="2025-03-17T17:50:03.464120234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:50:03.466344 containerd[1481]: time="2025-03-17T17:50:03.465912605Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:50:03.466344 containerd[1481]: time="2025-03-17T17:50:03.466248426Z" level=info msg="Connect containerd service"
Mar 17 17:50:03.467273 containerd[1481]: time="2025-03-17T17:50:03.466781867Z" level=info msg="using legacy CRI server"
Mar 17 17:50:03.467273 containerd[1481]: time="2025-03-17T17:50:03.467196271Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 17:50:03.468168 containerd[1481]: time="2025-03-17T17:50:03.467806818Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 17:50:03.471214 containerd[1481]: time="2025-03-17T17:50:03.470830592Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:50:03.473367 containerd[1481]: time="2025-03-17T17:50:03.471769766Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 17:50:03.473367 containerd[1481]: time="2025-03-17T17:50:03.471852002Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 17:50:03.473367 containerd[1481]: time="2025-03-17T17:50:03.471930131Z" level=info msg="Start subscribing containerd event"
Mar 17 17:50:03.473367 containerd[1481]: time="2025-03-17T17:50:03.471977477Z" level=info msg="Start recovering state"
Mar 17 17:50:03.473367 containerd[1481]: time="2025-03-17T17:50:03.472061129Z" level=info msg="Start event monitor"
Mar 17 17:50:03.473367 containerd[1481]: time="2025-03-17T17:50:03.472078141Z" level=info msg="Start snapshots syncer"
Mar 17 17:50:03.473367 containerd[1481]: time="2025-03-17T17:50:03.472093432Z" level=info msg="Start cni network conf syncer for default"
Mar 17 17:50:03.473367 containerd[1481]: time="2025-03-17T17:50:03.472105295Z" level=info msg="Start streaming server"
Mar 17 17:50:03.472343 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 17:50:03.485016 containerd[1481]: time="2025-03-17T17:50:03.484969817Z" level=info msg="containerd successfully booted in 0.156429s"
Mar 17 17:50:03.506361 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:50:03.551164 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:50:03.569800 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:50:03.588307 systemd[1]: Started sshd@0-10.128.0.84:22-139.178.89.65:39368.service - OpenSSH per-connection server daemon (139.178.89.65:39368).
Mar 17 17:50:03.601313 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:50:03.602578 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:50:03.614437 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:50:03.660565 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:50:03.681382 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:50:03.687327 tar[1479]: linux-amd64/LICENSE
Mar 17 17:50:03.687782 tar[1479]: linux-amd64/README.md
Mar 17 17:50:03.702407 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 17 17:50:03.712400 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:50:03.735406 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 17 17:50:03.785091 systemd-networkd[1390]: eth0: Gained IPv6LL
Mar 17 17:50:03.789043 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:50:03.800907 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:50:03.817198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:50:03.840644 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:50:03.855355 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Mar 17 17:50:03.884942 init.sh[1569]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Mar 17 17:50:03.885923 init.sh[1569]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Mar 17 17:50:03.885923 init.sh[1569]: + /usr/bin/google_instance_setup
Mar 17 17:50:03.902253 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:50:03.992309 sshd[1554]: Accepted publickey for core from 139.178.89.65 port 39368 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:50:03.998191 sshd-session[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:50:04.012780 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 17 17:50:04.033937 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 17 17:50:04.066365 systemd-logind[1468]: New session 1 of user core.
Mar 17 17:50:04.082538 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 17 17:50:04.105317 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 17 17:50:04.137615 (systemd)[1582]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 17:50:04.143375 systemd-logind[1468]: New session c1 of user core.
Mar 17 17:50:04.438278 systemd[1582]: Queued start job for default target default.target.
Mar 17 17:50:04.444442 systemd[1582]: Created slice app.slice - User Application Slice.
Mar 17 17:50:04.444491 systemd[1582]: Reached target paths.target - Paths.
Mar 17 17:50:04.445134 systemd[1582]: Reached target timers.target - Timers.
Mar 17 17:50:04.449058 systemd[1582]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 17 17:50:04.473156 systemd[1582]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 17 17:50:04.473532 systemd[1582]: Reached target sockets.target - Sockets.
Mar 17 17:50:04.473732 systemd[1582]: Reached target basic.target - Basic System.
Mar 17 17:50:04.473984 systemd[1582]: Reached target default.target - Main User Target.
Mar 17 17:50:04.474043 systemd[1582]: Startup finished in 316ms.
Mar 17 17:50:04.474531 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 17 17:50:04.494621 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 17 17:50:04.572559 instance-setup[1574]: INFO Running google_set_multiqueue.
Mar 17 17:50:04.593512 instance-setup[1574]: INFO Set channels for eth0 to 2.
Mar 17 17:50:04.598230 instance-setup[1574]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Mar 17 17:50:04.600185 instance-setup[1574]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Mar 17 17:50:04.600895 instance-setup[1574]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Mar 17 17:50:04.603757 instance-setup[1574]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Mar 17 17:50:04.603847 instance-setup[1574]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Mar 17 17:50:04.605623 instance-setup[1574]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Mar 17 17:50:04.605909 instance-setup[1574]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Mar 17 17:50:04.608025 instance-setup[1574]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Mar 17 17:50:04.617137 instance-setup[1574]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Mar 17 17:50:04.621642 instance-setup[1574]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Mar 17 17:50:04.623694 instance-setup[1574]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Mar 17 17:50:04.623920 instance-setup[1574]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Mar 17 17:50:04.647908 init.sh[1569]: + /usr/bin/google_metadata_script_runner --script-type startup
Mar 17 17:50:04.752604 systemd[1]: Started sshd@1-10.128.0.84:22-139.178.89.65:45846.service - OpenSSH per-connection server daemon (139.178.89.65:45846).
Mar 17 17:50:04.861584 startup-script[1622]: INFO Starting startup scripts.
Mar 17 17:50:04.868669 startup-script[1622]: INFO No startup scripts found in metadata.
Mar 17 17:50:04.868805 startup-script[1622]: INFO Finished running startup scripts.
Mar 17 17:50:04.892761 init.sh[1569]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Mar 17 17:50:04.894075 init.sh[1569]: + daemon_pids=()
Mar 17 17:50:04.894075 init.sh[1569]: + for d in accounts clock_skew network
Mar 17 17:50:04.894075 init.sh[1569]: + daemon_pids+=($!)
Mar 17 17:50:04.894075 init.sh[1569]: + for d in accounts clock_skew network
Mar 17 17:50:04.894238 init.sh[1628]: + /usr/bin/google_accounts_daemon
Mar 17 17:50:04.894535 init.sh[1569]: + daemon_pids+=($!)
Mar 17 17:50:04.894535 init.sh[1569]: + for d in accounts clock_skew network
Mar 17 17:50:04.894535 init.sh[1569]: + daemon_pids+=($!)
Mar 17 17:50:04.894661 init.sh[1569]: + NOTIFY_SOCKET=/run/systemd/notify
Mar 17 17:50:04.894661 init.sh[1569]: + /usr/bin/systemd-notify --ready
Mar 17 17:50:04.897916 init.sh[1630]: + /usr/bin/google_network_daemon
Mar 17 17:50:04.898218 init.sh[1629]: + /usr/bin/google_clock_skew_daemon
Mar 17 17:50:04.917517 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Mar 17 17:50:04.931159 init.sh[1569]: + wait -n 1628 1629 1630
Mar 17 17:50:05.104116 sshd[1624]: Accepted publickey for core from 139.178.89.65 port 45846 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:50:05.105690 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:50:05.123314 systemd-logind[1468]: New session 2 of user core.
Mar 17 17:50:05.128148 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 17 17:50:05.269057 google-clock-skew[1629]: INFO Starting Google Clock Skew daemon.
Mar 17 17:50:05.285923 google-clock-skew[1629]: INFO Clock drift token has changed: 0.
Mar 17 17:50:05.338792 google-networking[1630]: INFO Starting Google Networking daemon.
Mar 17 17:50:05.339256 sshd[1635]: Connection closed by 139.178.89.65 port 45846
Mar 17 17:50:05.341494 sshd-session[1624]: pam_unix(sshd:session): session closed for user core
Mar 17 17:50:05.351751 systemd[1]: sshd@1-10.128.0.84:22-139.178.89.65:45846.service: Deactivated successfully.
Mar 17 17:50:05.356439 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 17:50:05.358571 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit.
Mar 17 17:50:05.363462 systemd-logind[1468]: Removed session 2.
Mar 17 17:50:05.365574 groupadd[1641]: group added to /etc/group: name=google-sudoers, GID=1000
Mar 17 17:50:05.371088 groupadd[1641]: group added to /etc/gshadow: name=google-sudoers
Mar 17 17:50:05.399637 systemd[1]: Started sshd@2-10.128.0.84:22-139.178.89.65:45860.service - OpenSSH per-connection server daemon (139.178.89.65:45860).
Mar 17 17:50:05.441378 groupadd[1641]: new group: name=google-sudoers, GID=1000
Mar 17 17:50:05.471184 google-accounts[1628]: INFO Starting Google Accounts daemon.
Mar 17 17:50:05.483651 google-accounts[1628]: WARNING OS Login not installed.
Mar 17 17:50:05.485507 google-accounts[1628]: INFO Creating a new user account for 0.
Mar 17 17:50:05.491948 init.sh[1656]: useradd: invalid user name '0': use --badname to ignore
Mar 17 17:50:05.491542 google-accounts[1628]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Mar 17 17:50:05.000884 systemd-resolved[1391]: Clock change detected. Flushing caches.
Mar 17 17:50:05.018481 systemd-journald[1116]: Time jumped backwards, rotating.
Mar 17 17:50:05.006596 google-clock-skew[1629]: INFO Synced system time with hardware clock.
Mar 17 17:50:05.172596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:50:05.185703 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 17 17:50:05.193438 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:50:05.197170 systemd[1]: Startup finished in 1.014s (kernel) + 11.148s (initrd) + 9.850s (userspace) = 22.013s.
Mar 17 17:50:05.208892 sshd[1650]: Accepted publickey for core from 139.178.89.65 port 45860 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:50:05.212912 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:50:05.237077 systemd-logind[1468]: New session 3 of user core.
Mar 17 17:50:05.241056 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 17 17:50:05.425952 sshd[1669]: Connection closed by 139.178.89.65 port 45860
Mar 17 17:50:05.426800 sshd-session[1650]: pam_unix(sshd:session): session closed for user core
Mar 17 17:50:05.433046 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit.
Mar 17 17:50:05.433293 systemd[1]: sshd@2-10.128.0.84:22-139.178.89.65:45860.service: Deactivated successfully.
Mar 17 17:50:05.436187 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 17:50:05.439025 systemd-logind[1468]: Removed session 3.
Mar 17 17:50:05.808354 ntpd[1450]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:54%2]:123
Mar 17 17:50:05.809026 ntpd[1450]: 17 Mar 17:50:05 ntpd[1450]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:54%2]:123
Mar 17 17:50:06.160618 kubelet[1664]: E0317 17:50:06.160542 1664 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:50:06.163677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:50:06.163956 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:50:06.164538 systemd[1]: kubelet.service: Consumed 1.215s CPU time, 241.3M memory peak.
Mar 17 17:50:15.490288 systemd[1]: Started sshd@3-10.128.0.84:22-139.178.89.65:58524.service - OpenSSH per-connection server daemon (139.178.89.65:58524).
Mar 17 17:50:15.798827 sshd[1682]: Accepted publickey for core from 139.178.89.65 port 58524 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:50:15.800961 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:50:15.807552 systemd-logind[1468]: New session 4 of user core.
Mar 17 17:50:15.815094 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 17 17:50:16.015809 sshd[1684]: Connection closed by 139.178.89.65 port 58524
Mar 17 17:50:16.016695 sshd-session[1682]: pam_unix(sshd:session): session closed for user core
Mar 17 17:50:16.021388 systemd[1]: sshd@3-10.128.0.84:22-139.178.89.65:58524.service: Deactivated successfully.
Mar 17 17:50:16.024576 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 17:50:16.027209 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit.
Mar 17 17:50:16.028649 systemd-logind[1468]: Removed session 4.
Mar 17 17:50:16.077225 systemd[1]: Started sshd@4-10.128.0.84:22-139.178.89.65:58536.service - OpenSSH per-connection server daemon (139.178.89.65:58536).
Mar 17 17:50:16.318833 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:50:16.328134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:50:16.381578 sshd[1690]: Accepted publickey for core from 139.178.89.65 port 58536 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:50:16.383384 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:50:16.389857 systemd-logind[1468]: New session 5 of user core.
Mar 17 17:50:16.397039 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 17 17:50:16.592909 sshd[1695]: Connection closed by 139.178.89.65 port 58536
Mar 17 17:50:16.593081 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
Mar 17 17:50:16.608925 systemd[1]: sshd@4-10.128.0.84:22-139.178.89.65:58536.service: Deactivated successfully.
Mar 17 17:50:16.619352 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 17:50:16.622692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:50:16.624540 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit.
Mar 17 17:50:16.628301 systemd-logind[1468]: Removed session 5.
Mar 17 17:50:16.633632 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:50:16.648917 systemd[1]: Started sshd@5-10.128.0.84:22-139.178.89.65:58540.service - OpenSSH per-connection server daemon (139.178.89.65:58540).
Mar 17 17:50:16.713881 kubelet[1703]: E0317 17:50:16.713810 1703 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:50:16.718580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:50:16.718963 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:50:16.719506 systemd[1]: kubelet.service: Consumed 193ms CPU time, 96.5M memory peak.
Mar 17 17:50:16.950748 sshd[1712]: Accepted publickey for core from 139.178.89.65 port 58540 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:50:16.952464 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:50:16.959596 systemd-logind[1468]: New session 6 of user core.
Mar 17 17:50:16.967027 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 17 17:50:17.165901 sshd[1716]: Connection closed by 139.178.89.65 port 58540
Mar 17 17:50:17.166808 sshd-session[1712]: pam_unix(sshd:session): session closed for user core
Mar 17 17:50:17.172115 systemd[1]: sshd@5-10.128.0.84:22-139.178.89.65:58540.service: Deactivated successfully.
Mar 17 17:50:17.174455 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 17:50:17.175446 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit.
Mar 17 17:50:17.176916 systemd-logind[1468]: Removed session 6.
Mar 17 17:50:17.225212 systemd[1]: Started sshd@6-10.128.0.84:22-139.178.89.65:58554.service - OpenSSH per-connection server daemon (139.178.89.65:58554).
Mar 17 17:50:17.523060 sshd[1722]: Accepted publickey for core from 139.178.89.65 port 58554 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:50:17.524764 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:50:17.531858 systemd-logind[1468]: New session 7 of user core.
Mar 17 17:50:17.543042 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 17 17:50:17.720927 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 17 17:50:17.721440 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:50:17.738744 sudo[1725]: pam_unix(sudo:session): session closed for user root
Mar 17 17:50:17.782418 sshd[1724]: Connection closed by 139.178.89.65 port 58554
Mar 17 17:50:17.783999 sshd-session[1722]: pam_unix(sshd:session): session closed for user core
Mar 17 17:50:17.789499 systemd[1]: sshd@6-10.128.0.84:22-139.178.89.65:58554.service: Deactivated successfully.
Mar 17 17:50:17.791905 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 17:50:17.793071 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit.
Mar 17 17:50:17.794543 systemd-logind[1468]: Removed session 7.
Mar 17 17:50:17.844184 systemd[1]: Started sshd@7-10.128.0.84:22-139.178.89.65:58570.service - OpenSSH per-connection server daemon (139.178.89.65:58570).
Mar 17 17:50:18.145473 sshd[1731]: Accepted publickey for core from 139.178.89.65 port 58570 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:50:18.147257 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:50:18.153312 systemd-logind[1468]: New session 8 of user core.
Mar 17 17:50:18.164030 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 17 17:50:18.326148 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 17 17:50:18.326649 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:50:18.331922 sudo[1735]: pam_unix(sudo:session): session closed for user root
Mar 17 17:50:18.346006 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 17 17:50:18.346492 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:50:18.362373 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:50:18.401286 augenrules[1757]: No rules
Mar 17 17:50:18.402469 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:50:18.402769 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:50:18.404529 sudo[1734]: pam_unix(sudo:session): session closed for user root
Mar 17 17:50:18.447434 sshd[1733]: Connection closed by 139.178.89.65 port 58570
Mar 17 17:50:18.448244 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Mar 17 17:50:18.452668 systemd[1]: sshd@7-10.128.0.84:22-139.178.89.65:58570.service: Deactivated successfully.
Mar 17 17:50:18.455065 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 17:50:18.456936 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit.
Mar 17 17:50:18.458574 systemd-logind[1468]: Removed session 8.
Mar 17 17:50:18.509205 systemd[1]: Started sshd@8-10.128.0.84:22-139.178.89.65:58578.service - OpenSSH per-connection server daemon (139.178.89.65:58578).
Mar 17 17:50:18.800387 sshd[1766]: Accepted publickey for core from 139.178.89.65 port 58578 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:50:18.802211 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:50:18.809323 systemd-logind[1468]: New session 9 of user core.
Mar 17 17:50:18.824020 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 17:50:18.979886 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 17:50:18.980386 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:50:19.445517 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 17 17:50:19.448118 (dockerd)[1785]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 17 17:50:19.861113 dockerd[1785]: time="2025-03-17T17:50:19.860948973Z" level=info msg="Starting up"
Mar 17 17:50:19.994010 dockerd[1785]: time="2025-03-17T17:50:19.993725402Z" level=info msg="Loading containers: start."
Mar 17 17:50:20.208814 kernel: Initializing XFRM netlink socket
Mar 17 17:50:20.322980 systemd-networkd[1390]: docker0: Link UP
Mar 17 17:50:20.357507 dockerd[1785]: time="2025-03-17T17:50:20.357442482Z" level=info msg="Loading containers: done."
Mar 17 17:50:20.376459 dockerd[1785]: time="2025-03-17T17:50:20.376395711Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 17:50:20.376640 dockerd[1785]: time="2025-03-17T17:50:20.376516375Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Mar 17 17:50:20.376705 dockerd[1785]: time="2025-03-17T17:50:20.376667463Z" level=info msg="Daemon has completed initialization"
Mar 17 17:50:20.417464 dockerd[1785]: time="2025-03-17T17:50:20.417399655Z" level=info msg="API listen on /run/docker.sock"
Mar 17 17:50:20.417817 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 17 17:50:21.461745 containerd[1481]: time="2025-03-17T17:50:21.461380087Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 17 17:50:21.921138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2518213516.mount: Deactivated successfully.
Mar 17 17:50:23.556512 containerd[1481]: time="2025-03-17T17:50:23.556415005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:23.558142 containerd[1481]: time="2025-03-17T17:50:23.558069146Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32681201"
Mar 17 17:50:23.559427 containerd[1481]: time="2025-03-17T17:50:23.559354715Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:23.564773 containerd[1481]: time="2025-03-17T17:50:23.563146131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:23.564773 containerd[1481]: time="2025-03-17T17:50:23.564511748Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 2.103073455s"
Mar 17 17:50:23.564773 containerd[1481]: time="2025-03-17T17:50:23.564556585Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\""
Mar 17 17:50:23.594563 containerd[1481]: time="2025-03-17T17:50:23.594500663Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 17:50:25.156490 containerd[1481]: time="2025-03-17T17:50:25.156416315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:25.158253 containerd[1481]: time="2025-03-17T17:50:25.158191018Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29621706"
Mar 17 17:50:25.159433 containerd[1481]: time="2025-03-17T17:50:25.159353272Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:25.165200 containerd[1481]: time="2025-03-17T17:50:25.165088833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:25.167950 containerd[1481]: time="2025-03-17T17:50:25.166651257Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 1.572096724s"
Mar 17 17:50:25.167950 containerd[1481]: time="2025-03-17T17:50:25.166703580Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\""
Mar 17 17:50:25.198998 containerd[1481]: time="2025-03-17T17:50:25.198939811Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 17:50:26.252037 containerd[1481]: time="2025-03-17T17:50:26.251963915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:26.253683 containerd[1481]: time="2025-03-17T17:50:26.253613006Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17905225"
Mar 17 17:50:26.254921 containerd[1481]: time="2025-03-17T17:50:26.254876952Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:26.258421 containerd[1481]: time="2025-03-17T17:50:26.258359921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:26.260495 containerd[1481]: time="2025-03-17T17:50:26.259850641Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.0608623s"
Mar 17 17:50:26.260495 containerd[1481]: time="2025-03-17T17:50:26.259894999Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 17 17:50:26.290648 containerd[1481]: time="2025-03-17T17:50:26.290580436Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 17:50:26.882105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 17:50:26.888066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:50:27.248059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:50:27.251466 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:50:27.341292 kubelet[2067]: E0317 17:50:27.341236 2067 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:50:27.346828 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:50:27.347096 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:50:27.348086 systemd[1]: kubelet.service: Consumed 208ms CPU time, 97.4M memory peak.
Mar 17 17:50:27.527342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500696807.mount: Deactivated successfully.
Mar 17 17:50:28.100796 containerd[1481]: time="2025-03-17T17:50:28.100726141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:28.102188 containerd[1481]: time="2025-03-17T17:50:28.102119773Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29187267"
Mar 17 17:50:28.103851 containerd[1481]: time="2025-03-17T17:50:28.103760838Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:28.106668 containerd[1481]: time="2025-03-17T17:50:28.106538766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:28.107945 containerd[1481]: time="2025-03-17T17:50:28.107413708Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 1.816784902s"
Mar 17 17:50:28.107945 containerd[1481]: time="2025-03-17T17:50:28.107467799Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 17 17:50:28.135568 containerd[1481]: time="2025-03-17T17:50:28.135522599Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 17:50:28.501531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157916976.mount: Deactivated successfully.
Mar 17 17:50:29.525252 containerd[1481]: time="2025-03-17T17:50:29.525182405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:29.526886 containerd[1481]: time="2025-03-17T17:50:29.526816979Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419"
Mar 17 17:50:29.528014 containerd[1481]: time="2025-03-17T17:50:29.527974613Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:29.532491 containerd[1481]: time="2025-03-17T17:50:29.532422649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:29.534397 containerd[1481]: time="2025-03-17T17:50:29.534349795Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.398763015s"
Mar 17 17:50:29.534484 containerd[1481]: time="2025-03-17T17:50:29.534400675Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 17 17:50:29.564614 containerd[1481]: time="2025-03-17T17:50:29.564555595Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 17:50:29.914354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3128701234.mount: Deactivated successfully.
Mar 17 17:50:29.919745 containerd[1481]: time="2025-03-17T17:50:29.919685244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:29.920962 containerd[1481]: time="2025-03-17T17:50:29.920901789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188"
Mar 17 17:50:29.922288 containerd[1481]: time="2025-03-17T17:50:29.922204188Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:29.925453 containerd[1481]: time="2025-03-17T17:50:29.925411991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:29.926767 containerd[1481]: time="2025-03-17T17:50:29.926601783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 361.999407ms"
Mar 17 17:50:29.926767 containerd[1481]: time="2025-03-17T17:50:29.926647043Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 17 17:50:29.957737 containerd[1481]: time="2025-03-17T17:50:29.957609478Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 17:50:30.329696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1968677086.mount: Deactivated successfully.
Mar 17 17:50:32.521054 containerd[1481]: time="2025-03-17T17:50:32.520982611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:32.522701 containerd[1481]: time="2025-03-17T17:50:32.522628262Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061"
Mar 17 17:50:32.523737 containerd[1481]: time="2025-03-17T17:50:32.523697251Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:32.530026 containerd[1481]: time="2025-03-17T17:50:32.529957250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:32.532682 containerd[1481]: time="2025-03-17T17:50:32.531542141Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.573881291s"
Mar 17 17:50:32.532682 containerd[1481]: time="2025-03-17T17:50:32.531584846Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 17 17:50:32.814237 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 17 17:50:35.740317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:50:35.740636 systemd[1]: kubelet.service: Consumed 208ms CPU time, 97.4M memory peak.
Mar 17 17:50:35.754192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:50:35.786370 systemd[1]: Reload requested from client PID 2253 ('systemctl') (unit session-9.scope)...
Mar 17 17:50:35.786387 systemd[1]: Reloading...
Mar 17 17:50:35.968876 zram_generator::config[2301]: No configuration found.
Mar 17 17:50:36.124178 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:50:36.267413 systemd[1]: Reloading finished in 480 ms.
Mar 17 17:50:36.335539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:50:36.350453 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:50:36.356080 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:50:36.358049 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:50:36.358402 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:50:36.358494 systemd[1]: kubelet.service: Consumed 138ms CPU time, 84.6M memory peak.
Mar 17 17:50:36.364165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:50:36.567844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:50:36.579324 (kubelet)[2352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:50:36.633925 kubelet[2352]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:50:36.633925 kubelet[2352]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:50:36.633925 kubelet[2352]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:50:36.634454 kubelet[2352]: I0317 17:50:36.634052 2352 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:50:37.020654 kubelet[2352]: I0317 17:50:37.020596 2352 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 17:50:37.020654 kubelet[2352]: I0317 17:50:37.020633 2352 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:50:37.021001 kubelet[2352]: I0317 17:50:37.020963 2352 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 17:50:37.046669 kubelet[2352]: I0317 17:50:37.046616 2352 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:50:37.048419 kubelet[2352]: E0317 17:50:37.048305 2352 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.84:6443: connect: connection refused
Mar 17 17:50:37.070937 kubelet[2352]: I0317 17:50:37.070905 2352 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:50:37.075262 kubelet[2352]: I0317 17:50:37.075180 2352 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:50:37.075554 kubelet[2352]: I0317 17:50:37.075256 2352 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 17:50:37.076621 kubelet[2352]: I0317 17:50:37.076570 2352 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:50:37.076621 kubelet[2352]: I0317 17:50:37.076611 2352 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 17:50:37.076865 kubelet[2352]: I0317 17:50:37.076826 2352 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:50:37.078669 kubelet[2352]: I0317 17:50:37.078324 2352 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 17:50:37.078669 kubelet[2352]: I0317 17:50:37.078359 2352 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:50:37.078669 kubelet[2352]: I0317 17:50:37.078393 2352 kubelet.go:312] "Adding apiserver pod source"
Mar 17 17:50:37.078669 kubelet[2352]: I0317 17:50:37.078420 2352 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:50:37.079173 kubelet[2352]: W0317 17:50:37.079092 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused
Mar 17 17:50:37.079238 kubelet[2352]: E0317 17:50:37.079176 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused
Mar 17 17:50:37.085236 kubelet[2352]: W0317 17:50:37.085015 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused
Mar 17 17:50:37.085236 kubelet[2352]: E0317 17:50:37.085085 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused
Mar 17 17:50:37.085688 kubelet[2352]: I0317 17:50:37.085565 2352 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:50:37.089148 kubelet[2352]: I0317 17:50:37.089120 2352 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:50:37.090494 kubelet[2352]: W0317 17:50:37.089323 2352 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 17:50:37.090494 kubelet[2352]: I0317 17:50:37.090161 2352 server.go:1264] "Started kubelet"
Mar 17 17:50:37.092349 kubelet[2352]: I0317 17:50:37.092266 2352 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:50:37.095501 kubelet[2352]: I0317 17:50:37.094454 2352 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 17:50:37.097862 kubelet[2352]: I0317 17:50:37.097823 2352 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:50:37.101828 kubelet[2352]: I0317 17:50:37.099892 2352 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:50:37.101828 kubelet[2352]: I0317 17:50:37.100207 2352 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:50:37.101828 kubelet[2352]: E0317 17:50:37.100425 2352 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal.182da871af6c4450 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal,UID:ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal,},FirstTimestamp:2025-03-17 17:50:37.090128976 +0000 UTC m=+0.505491157,LastTimestamp:2025-03-17 17:50:37.090128976 +0000 UTC m=+0.505491157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal,}"
Mar 17 17:50:37.107928 kubelet[2352]: I0317 17:50:37.107903 2352 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 17:50:37.110715 kubelet[2352]: E0317 17:50:37.110664 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.84:6443: connect: connection refused" interval="200ms"
Mar 17 17:50:37.112060 kubelet[2352]: I0317 17:50:37.112020 2352 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:50:37.112173 kubelet[2352]: I0317 17:50:37.112078 2352 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:50:37.112628 kubelet[2352]: W0317 17:50:37.112568 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused
Mar 17 17:50:37.112798 kubelet[2352]: E0317 17:50:37.112647 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused
Mar 17 17:50:37.114436 kubelet[2352]: E0317 17:50:37.114411 2352 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:50:37.114663 kubelet[2352]: I0317 17:50:37.114487 2352 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:50:37.114663 kubelet[2352]: I0317 17:50:37.114595 2352 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:50:37.116639 kubelet[2352]: I0317 17:50:37.114925 2352 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:50:37.129056 kubelet[2352]: I0317 17:50:37.128994 2352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:50:37.131176 kubelet[2352]: I0317 17:50:37.131140 2352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:50:37.131290 kubelet[2352]: I0317 17:50:37.131186 2352 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:50:37.131290 kubelet[2352]: I0317 17:50:37.131210 2352 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 17:50:37.131390 kubelet[2352]: E0317 17:50:37.131278 2352 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:50:37.149693 kubelet[2352]: W0317 17:50:37.149610 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused
Mar 17 17:50:37.149969 kubelet[2352]: E0317 17:50:37.149936 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused
Mar 17 17:50:37.161816 kubelet[2352]: I0317 17:50:37.161752 2352 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:50:37.161963 kubelet[2352]: I0317 17:50:37.161868 2352 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:50:37.161963 kubelet[2352]: I0317 17:50:37.161924 2352 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:50:37.164805 kubelet[2352]: I0317 17:50:37.164759 2352 policy_none.go:49] "None policy: Start"
Mar 17 17:50:37.165620 kubelet[2352]: I0317 17:50:37.165544 2352 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:50:37.165620 kubelet[2352]: I0317 17:50:37.165577 2352 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:50:37.179861 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 17:50:37.192267 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 17:50:37.196946 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 17:50:37.210341 kubelet[2352]: I0317 17:50:37.210303 2352 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:50:37.210666 kubelet[2352]: I0317 17:50:37.210612 2352 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:50:37.210853 kubelet[2352]: I0317 17:50:37.210831 2352 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:50:37.213841 kubelet[2352]: E0317 17:50:37.213700 2352 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" not found"
Mar 17 17:50:37.215690 kubelet[2352]: I0317 17:50:37.215587 2352 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.216574 kubelet[2352]: E0317 17:50:37.216420 2352 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.84:6443/api/v1/nodes\": dial tcp 10.128.0.84:6443: connect: connection refused" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.231920 kubelet[2352]: I0317 17:50:37.231850 2352 topology_manager.go:215] "Topology Admit Handler" podUID="febe83d75126bd5206785a66783fa087" podNamespace="kube-system" podName="kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.238002 kubelet[2352]: I0317 17:50:37.237946 2352 topology_manager.go:215] "Topology Admit Handler" podUID="7e0bc71d26b1a9e268c8f20be9ce925b" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.244891 kubelet[2352]: I0317 17:50:37.244698 2352 topology_manager.go:215] "Topology Admit Handler" podUID="4c95007f06ce4ecb17c73929cd424461" podNamespace="kube-system" podName="kube-scheduler-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.253184 systemd[1]: Created slice kubepods-burstable-podfebe83d75126bd5206785a66783fa087.slice - libcontainer container kubepods-burstable-podfebe83d75126bd5206785a66783fa087.slice.
Mar 17 17:50:37.273305 systemd[1]: Created slice kubepods-burstable-pod7e0bc71d26b1a9e268c8f20be9ce925b.slice - libcontainer container kubepods-burstable-pod7e0bc71d26b1a9e268c8f20be9ce925b.slice.
Mar 17 17:50:37.287234 systemd[1]: Created slice kubepods-burstable-pod4c95007f06ce4ecb17c73929cd424461.slice - libcontainer container kubepods-burstable-pod4c95007f06ce4ecb17c73929cd424461.slice.
Mar 17 17:50:37.312368 kubelet[2352]: E0317 17:50:37.312309 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.84:6443: connect: connection refused" interval="400ms"
Mar 17 17:50:37.312755 kubelet[2352]: I0317 17:50:37.312616 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/febe83d75126bd5206785a66783fa087-ca-certs\") pod \"kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"febe83d75126bd5206785a66783fa087\") " pod="kube-system/kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.412924 kubelet[2352]: I0317 17:50:37.412822 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c95007f06ce4ecb17c73929cd424461-kubeconfig\") pod \"kube-scheduler-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"4c95007f06ce4ecb17c73929cd424461\") " pod="kube-system/kube-scheduler-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.412924 kubelet[2352]: I0317 17:50:37.412893 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/febe83d75126bd5206785a66783fa087-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"febe83d75126bd5206785a66783fa087\") " pod="kube-system/kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.412924 kubelet[2352]: I0317 17:50:37.412924 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e0bc71d26b1a9e268c8f20be9ce925b-ca-certs\") pod \"kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"7e0bc71d26b1a9e268c8f20be9ce925b\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.413207 kubelet[2352]: I0317 17:50:37.412958 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e0bc71d26b1a9e268c8f20be9ce925b-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"7e0bc71d26b1a9e268c8f20be9ce925b\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.413207 kubelet[2352]: I0317 17:50:37.412986 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e0bc71d26b1a9e268c8f20be9ce925b-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"7e0bc71d26b1a9e268c8f20be9ce925b\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.413207 kubelet[2352]: I0317 17:50:37.413014 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e0bc71d26b1a9e268c8f20be9ce925b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"7e0bc71d26b1a9e268c8f20be9ce925b\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.413207 kubelet[2352]: I0317 17:50:37.413088 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/febe83d75126bd5206785a66783fa087-k8s-certs\") pod \"kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"febe83d75126bd5206785a66783fa087\") " pod="kube-system/kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.413393 kubelet[2352]: I0317 17:50:37.413121 2352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e0bc71d26b1a9e268c8f20be9ce925b-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"7e0bc71d26b1a9e268c8f20be9ce925b\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal"
Mar 17 17:50:37.421901 kubelet[2352]: I0317 17:50:37.421853 2352 kubelet_node_status.go:73] "Attempting to register node"
node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:37.422336 kubelet[2352]: E0317 17:50:37.422283 2352 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.84:6443/api/v1/nodes\": dial tcp 10.128.0.84:6443: connect: connection refused" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:37.570691 containerd[1481]: time="2025-03-17T17:50:37.570243788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal,Uid:febe83d75126bd5206785a66783fa087,Namespace:kube-system,Attempt:0,}" Mar 17 17:50:37.586842 containerd[1481]: time="2025-03-17T17:50:37.586478444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal,Uid:7e0bc71d26b1a9e268c8f20be9ce925b,Namespace:kube-system,Attempt:0,}" Mar 17 17:50:37.590829 containerd[1481]: time="2025-03-17T17:50:37.590759610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal,Uid:4c95007f06ce4ecb17c73929cd424461,Namespace:kube-system,Attempt:0,}" Mar 17 17:50:37.713076 kubelet[2352]: E0317 17:50:37.713008 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.84:6443: connect: connection refused" interval="800ms" Mar 17 17:50:37.828079 kubelet[2352]: I0317 17:50:37.827927 2352 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:37.828664 kubelet[2352]: E0317 17:50:37.828605 2352 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.84:6443/api/v1/nodes\": dial tcp 
10.128.0.84:6443: connect: connection refused" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:38.140948 kubelet[2352]: W0317 17:50:38.140811 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Mar 17 17:50:38.140948 kubelet[2352]: E0317 17:50:38.140901 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Mar 17 17:50:38.141186 kubelet[2352]: W0317 17:50:38.141076 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Mar 17 17:50:38.141186 kubelet[2352]: E0317 17:50:38.141112 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Mar 17 17:50:38.514692 kubelet[2352]: E0317 17:50:38.514517 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.84:6443: connect: connection refused" interval="1.6s" Mar 17 17:50:38.576558 kubelet[2352]: W0317 17:50:38.576469 2352 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Mar 17 17:50:38.576558 kubelet[2352]: E0317 17:50:38.576559 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Mar 17 17:50:38.636644 kubelet[2352]: I0317 17:50:38.636583 2352 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:38.637113 kubelet[2352]: E0317 17:50:38.637063 2352 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.84:6443/api/v1/nodes\": dial tcp 10.128.0.84:6443: connect: connection refused" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:38.656793 kubelet[2352]: W0317 17:50:38.656700 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Mar 17 17:50:38.656793 kubelet[2352]: E0317 17:50:38.656802 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Mar 17 17:50:39.246668 kubelet[2352]: E0317 17:50:39.246609 2352 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial 
tcp 10.128.0.84:6443: connect: connection refused Mar 17 17:50:39.575842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63028161.mount: Deactivated successfully. Mar 17 17:50:39.583832 containerd[1481]: time="2025-03-17T17:50:39.583749721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:50:39.586993 containerd[1481]: time="2025-03-17T17:50:39.586932090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:50:39.589874 containerd[1481]: time="2025-03-17T17:50:39.589804804Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Mar 17 17:50:39.591016 containerd[1481]: time="2025-03-17T17:50:39.590953743Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:50:39.593865 containerd[1481]: time="2025-03-17T17:50:39.593815654Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:50:39.595414 containerd[1481]: time="2025-03-17T17:50:39.595362504Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:50:39.596203 containerd[1481]: time="2025-03-17T17:50:39.596110365Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:50:39.600469 containerd[1481]: time="2025-03-17T17:50:39.600404632Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:50:39.602830 containerd[1481]: time="2025-03-17T17:50:39.602123448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.015521097s" Mar 17 17:50:39.603661 containerd[1481]: time="2025-03-17T17:50:39.603612775Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.033217606s" Mar 17 17:50:39.605176 containerd[1481]: time="2025-03-17T17:50:39.605129659Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.01422173s" Mar 17 17:50:39.794266 containerd[1481]: time="2025-03-17T17:50:39.793988102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:39.794266 containerd[1481]: time="2025-03-17T17:50:39.794056535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:39.794266 containerd[1481]: time="2025-03-17T17:50:39.794082527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:39.794266 containerd[1481]: time="2025-03-17T17:50:39.794192436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:39.796440 containerd[1481]: time="2025-03-17T17:50:39.791424188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:39.796440 containerd[1481]: time="2025-03-17T17:50:39.795748479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:39.796440 containerd[1481]: time="2025-03-17T17:50:39.795770715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:39.796440 containerd[1481]: time="2025-03-17T17:50:39.796141633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:39.796812 containerd[1481]: time="2025-03-17T17:50:39.796165243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:39.796812 containerd[1481]: time="2025-03-17T17:50:39.796239968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:39.796812 containerd[1481]: time="2025-03-17T17:50:39.796270028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:39.796812 containerd[1481]: time="2025-03-17T17:50:39.796378417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:39.839052 systemd[1]: Started cri-containerd-876bfb9093f9a5feacad4dbd1f9e645c94bdb55e77c3c1ea0e02ff5a0fca1799.scope - libcontainer container 876bfb9093f9a5feacad4dbd1f9e645c94bdb55e77c3c1ea0e02ff5a0fca1799. Mar 17 17:50:39.848310 systemd[1]: Started cri-containerd-220e13af5bff6d2f01e2f0df5680be6a547b21b674be46befd361eabb869314e.scope - libcontainer container 220e13af5bff6d2f01e2f0df5680be6a547b21b674be46befd361eabb869314e. Mar 17 17:50:39.853150 systemd[1]: Started cri-containerd-5406f51d6e6d170a87db46e0d05200d3041865d2b344a2da97239fe81bbe6666.scope - libcontainer container 5406f51d6e6d170a87db46e0d05200d3041865d2b344a2da97239fe81bbe6666. Mar 17 17:50:39.931042 containerd[1481]: time="2025-03-17T17:50:39.930563089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal,Uid:febe83d75126bd5206785a66783fa087,Namespace:kube-system,Attempt:0,} returns sandbox id \"876bfb9093f9a5feacad4dbd1f9e645c94bdb55e77c3c1ea0e02ff5a0fca1799\"" Mar 17 17:50:39.936468 kubelet[2352]: E0317 17:50:39.935874 2352 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-21291" Mar 17 17:50:39.949217 containerd[1481]: time="2025-03-17T17:50:39.949159933Z" level=info msg="CreateContainer within sandbox \"876bfb9093f9a5feacad4dbd1f9e645c94bdb55e77c3c1ea0e02ff5a0fca1799\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:50:39.963874 containerd[1481]: time="2025-03-17T17:50:39.963523502Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal,Uid:7e0bc71d26b1a9e268c8f20be9ce925b,Namespace:kube-system,Attempt:0,} returns sandbox id \"220e13af5bff6d2f01e2f0df5680be6a547b21b674be46befd361eabb869314e\"" Mar 17 17:50:39.966424 kubelet[2352]: E0317 17:50:39.966284 2352 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flat" Mar 17 17:50:39.970082 containerd[1481]: time="2025-03-17T17:50:39.969913684Z" level=info msg="CreateContainer within sandbox \"220e13af5bff6d2f01e2f0df5680be6a547b21b674be46befd361eabb869314e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:50:39.975514 containerd[1481]: time="2025-03-17T17:50:39.975401364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal,Uid:4c95007f06ce4ecb17c73929cd424461,Namespace:kube-system,Attempt:0,} returns sandbox id \"5406f51d6e6d170a87db46e0d05200d3041865d2b344a2da97239fe81bbe6666\"" Mar 17 17:50:39.977472 kubelet[2352]: E0317 17:50:39.977360 2352 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-21291" Mar 17 17:50:39.980047 containerd[1481]: time="2025-03-17T17:50:39.979599786Z" level=info msg="CreateContainer within sandbox \"5406f51d6e6d170a87db46e0d05200d3041865d2b344a2da97239fe81bbe6666\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:50:39.987296 containerd[1481]: time="2025-03-17T17:50:39.987249050Z" level=info msg="CreateContainer within sandbox 
\"876bfb9093f9a5feacad4dbd1f9e645c94bdb55e77c3c1ea0e02ff5a0fca1799\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e66c3a6a8c6740591bfa850b0c1bba6948b196cc04c69db3f166c74b60582a53\"" Mar 17 17:50:39.988254 containerd[1481]: time="2025-03-17T17:50:39.988224613Z" level=info msg="StartContainer for \"e66c3a6a8c6740591bfa850b0c1bba6948b196cc04c69db3f166c74b60582a53\"" Mar 17 17:50:39.997078 containerd[1481]: time="2025-03-17T17:50:39.997033405Z" level=info msg="CreateContainer within sandbox \"220e13af5bff6d2f01e2f0df5680be6a547b21b674be46befd361eabb869314e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"468c47770e2dcfce224bf8eb0d0719dd940bfe28fd0853de4417fe342fdeecd7\"" Mar 17 17:50:39.999594 containerd[1481]: time="2025-03-17T17:50:39.998158281Z" level=info msg="StartContainer for \"468c47770e2dcfce224bf8eb0d0719dd940bfe28fd0853de4417fe342fdeecd7\"" Mar 17 17:50:40.004604 containerd[1481]: time="2025-03-17T17:50:40.004555212Z" level=info msg="CreateContainer within sandbox \"5406f51d6e6d170a87db46e0d05200d3041865d2b344a2da97239fe81bbe6666\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"26121c6c6f0ba3a518f818dc5b95472b6a2a19afe7b9d673793e0ab920d3e754\"" Mar 17 17:50:40.005162 containerd[1481]: time="2025-03-17T17:50:40.005130272Z" level=info msg="StartContainer for \"26121c6c6f0ba3a518f818dc5b95472b6a2a19afe7b9d673793e0ab920d3e754\"" Mar 17 17:50:40.037512 kubelet[2352]: W0317 17:50:40.037456 2352 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Mar 17 17:50:40.037840 kubelet[2352]: E0317 17:50:40.037769 2352 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.128.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Mar 17 17:50:40.053040 systemd[1]: Started cri-containerd-e66c3a6a8c6740591bfa850b0c1bba6948b196cc04c69db3f166c74b60582a53.scope - libcontainer container e66c3a6a8c6740591bfa850b0c1bba6948b196cc04c69db3f166c74b60582a53. Mar 17 17:50:40.074054 systemd[1]: Started cri-containerd-468c47770e2dcfce224bf8eb0d0719dd940bfe28fd0853de4417fe342fdeecd7.scope - libcontainer container 468c47770e2dcfce224bf8eb0d0719dd940bfe28fd0853de4417fe342fdeecd7. Mar 17 17:50:40.086258 systemd[1]: Started cri-containerd-26121c6c6f0ba3a518f818dc5b95472b6a2a19afe7b9d673793e0ab920d3e754.scope - libcontainer container 26121c6c6f0ba3a518f818dc5b95472b6a2a19afe7b9d673793e0ab920d3e754. Mar 17 17:50:40.116062 kubelet[2352]: E0317 17:50:40.115899 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.84:6443: connect: connection refused" interval="3.2s" Mar 17 17:50:40.200171 containerd[1481]: time="2025-03-17T17:50:40.200119827Z" level=info msg="StartContainer for \"e66c3a6a8c6740591bfa850b0c1bba6948b196cc04c69db3f166c74b60582a53\" returns successfully" Mar 17 17:50:40.223077 containerd[1481]: time="2025-03-17T17:50:40.222958651Z" level=info msg="StartContainer for \"26121c6c6f0ba3a518f818dc5b95472b6a2a19afe7b9d673793e0ab920d3e754\" returns successfully" Mar 17 17:50:40.223077 containerd[1481]: time="2025-03-17T17:50:40.222972193Z" level=info msg="StartContainer for \"468c47770e2dcfce224bf8eb0d0719dd940bfe28fd0853de4417fe342fdeecd7\" returns successfully" Mar 17 17:50:40.243907 kubelet[2352]: I0317 17:50:40.243841 2352 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 
17:50:40.244300 kubelet[2352]: E0317 17:50:40.244257 2352 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.84:6443/api/v1/nodes\": dial tcp 10.128.0.84:6443: connect: connection refused" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:43.082978 kubelet[2352]: I0317 17:50:43.082914 2352 apiserver.go:52] "Watching apiserver" Mar 17 17:50:43.112899 kubelet[2352]: I0317 17:50:43.112845 2352 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:50:43.125621 kubelet[2352]: E0317 17:50:43.125548 2352 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" not found Mar 17 17:50:43.320864 kubelet[2352]: E0317 17:50:43.320813 2352 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" not found" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:43.450144 kubelet[2352]: I0317 17:50:43.449147 2352 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:43.458796 kubelet[2352]: I0317 17:50:43.458688 2352 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:45.051973 systemd[1]: Reload requested from client PID 2628 ('systemctl') (unit session-9.scope)... Mar 17 17:50:45.051999 systemd[1]: Reloading... Mar 17 17:50:45.234874 zram_generator::config[2673]: No configuration found. Mar 17 17:50:45.433563 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 17 17:50:45.629394 systemd[1]: Reloading finished in 576 ms. Mar 17 17:50:45.667688 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:50:45.684800 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:50:45.685162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:50:45.685230 systemd[1]: kubelet.service: Consumed 1.020s CPU time, 116.3M memory peak. Mar 17 17:50:45.692279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:50:45.915922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:50:45.929423 (kubelet)[2721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:50:46.013125 kubelet[2721]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:50:46.013125 kubelet[2721]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:50:46.013125 kubelet[2721]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:50:46.013627 kubelet[2721]: I0317 17:50:46.013150 2721 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:50:46.022869 kubelet[2721]: I0317 17:50:46.021935 2721 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:50:46.022869 kubelet[2721]: I0317 17:50:46.021966 2721 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:50:46.022869 kubelet[2721]: I0317 17:50:46.022260 2721 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:50:46.025244 kubelet[2721]: I0317 17:50:46.025177 2721 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:50:46.030811 kubelet[2721]: I0317 17:50:46.030107 2721 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:50:46.046701 kubelet[2721]: I0317 17:50:46.046651 2721 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:50:46.048094 kubelet[2721]: I0317 17:50:46.047304 2721 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:50:46.048094 kubelet[2721]: I0317 17:50:46.047352 2721 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:50:46.048094 kubelet[2721]: I0317 17:50:46.047632 2721 topology_manager.go:138] "Creating 
topology manager with none policy" Mar 17 17:50:46.048094 kubelet[2721]: I0317 17:50:46.047649 2721 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:50:46.048685 kubelet[2721]: I0317 17:50:46.047709 2721 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:50:46.048685 kubelet[2721]: I0317 17:50:46.047921 2721 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:50:46.048685 kubelet[2721]: I0317 17:50:46.047944 2721 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:50:46.048685 kubelet[2721]: I0317 17:50:46.047975 2721 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:50:46.048685 kubelet[2721]: I0317 17:50:46.047996 2721 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:50:46.054961 kubelet[2721]: I0317 17:50:46.054930 2721 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:50:46.055281 kubelet[2721]: I0317 17:50:46.055259 2721 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:50:46.059814 kubelet[2721]: I0317 17:50:46.058283 2721 server.go:1264] "Started kubelet" Mar 17 17:50:46.060134 kubelet[2721]: I0317 17:50:46.060118 2721 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:50:46.068093 kubelet[2721]: I0317 17:50:46.068051 2721 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:50:46.071200 kubelet[2721]: I0317 17:50:46.071136 2721 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:50:46.071705 kubelet[2721]: I0317 17:50:46.071678 2721 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:50:46.074656 kubelet[2721]: I0317 17:50:46.074630 2721 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:50:46.076182 
sudo[2734]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:50:46.077239 kubelet[2721]: I0317 17:50:46.076182 2721 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:50:46.077239 kubelet[2721]: I0317 17:50:46.076330 2721 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:50:46.077239 kubelet[2721]: I0317 17:50:46.076532 2721 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:50:46.076749 sudo[2734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:50:46.081285 kubelet[2721]: E0317 17:50:46.081256 2721 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:50:46.086797 kubelet[2721]: I0317 17:50:46.086639 2721 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:50:46.108126 kubelet[2721]: I0317 17:50:46.108072 2721 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:50:46.109803 kubelet[2721]: I0317 17:50:46.108556 2721 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:50:46.126149 kubelet[2721]: I0317 17:50:46.126107 2721 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:50:46.141205 kubelet[2721]: I0317 17:50:46.141161 2721 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:50:46.141400 kubelet[2721]: I0317 17:50:46.141387 2721 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:50:46.141512 kubelet[2721]: I0317 17:50:46.141500 2721 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:50:46.141714 kubelet[2721]: E0317 17:50:46.141691 2721 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:50:46.187217 kubelet[2721]: I0317 17:50:46.187176 2721 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.204981 kubelet[2721]: I0317 17:50:46.204940 2721 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.206123 kubelet[2721]: I0317 17:50:46.206099 2721 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.236850 kubelet[2721]: I0317 17:50:46.236650 2721 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:50:46.236850 kubelet[2721]: I0317 17:50:46.236672 2721 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:50:46.236850 kubelet[2721]: I0317 17:50:46.236696 2721 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:50:46.237853 kubelet[2721]: I0317 17:50:46.237474 2721 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:50:46.237853 kubelet[2721]: I0317 17:50:46.237493 2721 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:50:46.237853 kubelet[2721]: I0317 17:50:46.237524 2721 policy_none.go:49] "None policy: Start" Mar 17 17:50:46.238909 kubelet[2721]: I0317 17:50:46.238661 2721 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:50:46.238909 kubelet[2721]: I0317 17:50:46.238711 2721 state_mem.go:35] 
"Initializing new in-memory state store" Mar 17 17:50:46.239617 kubelet[2721]: I0317 17:50:46.239398 2721 state_mem.go:75] "Updated machine memory state" Mar 17 17:50:46.242065 kubelet[2721]: E0317 17:50:46.241904 2721 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:50:46.248180 kubelet[2721]: I0317 17:50:46.248063 2721 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:50:46.250146 kubelet[2721]: I0317 17:50:46.249138 2721 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:50:46.251553 kubelet[2721]: I0317 17:50:46.251418 2721 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:50:46.444251 kubelet[2721]: I0317 17:50:46.442886 2721 topology_manager.go:215] "Topology Admit Handler" podUID="febe83d75126bd5206785a66783fa087" podNamespace="kube-system" podName="kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.444251 kubelet[2721]: I0317 17:50:46.443018 2721 topology_manager.go:215] "Topology Admit Handler" podUID="7e0bc71d26b1a9e268c8f20be9ce925b" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.444251 kubelet[2721]: I0317 17:50:46.443114 2721 topology_manager.go:215] "Topology Admit Handler" podUID="4c95007f06ce4ecb17c73929cd424461" podNamespace="kube-system" podName="kube-scheduler-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.463164 kubelet[2721]: W0317 17:50:46.462962 2721 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 17 17:50:46.463343 kubelet[2721]: W0317 17:50:46.463245 2721 warnings.go:70] metadata.name: this is used in 
the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 17 17:50:46.463412 kubelet[2721]: W0317 17:50:46.463370 2721 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 17 17:50:46.478290 kubelet[2721]: I0317 17:50:46.477890 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e0bc71d26b1a9e268c8f20be9ce925b-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"7e0bc71d26b1a9e268c8f20be9ce925b\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.478290 kubelet[2721]: I0317 17:50:46.477946 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e0bc71d26b1a9e268c8f20be9ce925b-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"7e0bc71d26b1a9e268c8f20be9ce925b\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.478290 kubelet[2721]: I0317 17:50:46.477984 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e0bc71d26b1a9e268c8f20be9ce925b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"7e0bc71d26b1a9e268c8f20be9ce925b\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.478290 kubelet[2721]: I0317 17:50:46.478018 2721 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c95007f06ce4ecb17c73929cd424461-kubeconfig\") pod \"kube-scheduler-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"4c95007f06ce4ecb17c73929cd424461\") " pod="kube-system/kube-scheduler-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.478596 kubelet[2721]: I0317 17:50:46.478061 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/febe83d75126bd5206785a66783fa087-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"febe83d75126bd5206785a66783fa087\") " pod="kube-system/kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.478596 kubelet[2721]: I0317 17:50:46.478092 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e0bc71d26b1a9e268c8f20be9ce925b-ca-certs\") pod \"kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"7e0bc71d26b1a9e268c8f20be9ce925b\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.478596 kubelet[2721]: I0317 17:50:46.478122 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e0bc71d26b1a9e268c8f20be9ce925b-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"7e0bc71d26b1a9e268c8f20be9ce925b\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.478596 kubelet[2721]: I0317 17:50:46.478151 2721 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/febe83d75126bd5206785a66783fa087-ca-certs\") pod \"kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"febe83d75126bd5206785a66783fa087\") " pod="kube-system/kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.478845 kubelet[2721]: I0317 17:50:46.478180 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/febe83d75126bd5206785a66783fa087-k8s-certs\") pod \"kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" (UID: \"febe83d75126bd5206785a66783fa087\") " pod="kube-system/kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" Mar 17 17:50:46.876241 sudo[2734]: pam_unix(sudo:session): session closed for user root Mar 17 17:50:47.057631 kubelet[2721]: I0317 17:50:47.057309 2721 apiserver.go:52] "Watching apiserver" Mar 17 17:50:47.077079 kubelet[2721]: I0317 17:50:47.076993 2721 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:50:47.184923 update_engine[1470]: I20250317 17:50:47.184837 1470 update_attempter.cc:509] Updating boot flags... 
Mar 17 17:50:47.224567 kubelet[2721]: I0317 17:50:47.224499 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" podStartSLOduration=1.224456858 podStartE2EDuration="1.224456858s" podCreationTimestamp="2025-03-17 17:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:50:47.224034188 +0000 UTC m=+1.287769039" watchObservedRunningTime="2025-03-17 17:50:47.224456858 +0000 UTC m=+1.288191699" Mar 17 17:50:47.277296 kubelet[2721]: I0317 17:50:47.277207 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" podStartSLOduration=1.2771818050000001 podStartE2EDuration="1.277181805s" podCreationTimestamp="2025-03-17 17:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:50:47.2491098 +0000 UTC m=+1.312844652" watchObservedRunningTime="2025-03-17 17:50:47.277181805 +0000 UTC m=+1.340916656" Mar 17 17:50:47.278446 kubelet[2721]: I0317 17:50:47.277628 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" podStartSLOduration=1.27761262 podStartE2EDuration="1.27761262s" podCreationTimestamp="2025-03-17 17:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:50:47.273207387 +0000 UTC m=+1.336942238" watchObservedRunningTime="2025-03-17 17:50:47.27761262 +0000 UTC m=+1.341347478" Mar 17 17:50:47.363813 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2777) Mar 17 17:50:47.576885 kernel: BTRFS 
warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2778) Mar 17 17:50:49.343325 sudo[1769]: pam_unix(sudo:session): session closed for user root Mar 17 17:50:49.386162 sshd[1768]: Connection closed by 139.178.89.65 port 58578 Mar 17 17:50:49.387089 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Mar 17 17:50:49.393464 systemd[1]: sshd@8-10.128.0.84:22-139.178.89.65:58578.service: Deactivated successfully. Mar 17 17:50:49.396853 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:50:49.397156 systemd[1]: session-9.scope: Consumed 6.722s CPU time, 295.2M memory peak. Mar 17 17:50:49.398861 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:50:49.400421 systemd-logind[1468]: Removed session 9. Mar 17 17:50:58.245122 kubelet[2721]: I0317 17:50:58.245081 2721 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:50:58.245809 containerd[1481]: time="2025-03-17T17:50:58.245708046Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:50:58.247384 kubelet[2721]: I0317 17:50:58.246658 2721 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:50:58.718227 kubelet[2721]: I0317 17:50:58.718139 2721 topology_manager.go:215] "Topology Admit Handler" podUID="1ad11c89-ad82-47fd-8d5f-250e61805e0c" podNamespace="kube-system" podName="kube-proxy-8t2kr" Mar 17 17:50:58.733597 systemd[1]: Created slice kubepods-besteffort-pod1ad11c89_ad82_47fd_8d5f_250e61805e0c.slice - libcontainer container kubepods-besteffort-pod1ad11c89_ad82_47fd_8d5f_250e61805e0c.slice. 
Mar 17 17:50:58.751405 kubelet[2721]: I0317 17:50:58.750099 2721 topology_manager.go:215] "Topology Admit Handler" podUID="4f17b226-cf4f-474d-8289-5da60396aaab" podNamespace="kube-system" podName="cilium-nvpg5" Mar 17 17:50:58.756803 kubelet[2721]: I0317 17:50:58.754605 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ad11c89-ad82-47fd-8d5f-250e61805e0c-xtables-lock\") pod \"kube-proxy-8t2kr\" (UID: \"1ad11c89-ad82-47fd-8d5f-250e61805e0c\") " pod="kube-system/kube-proxy-8t2kr" Mar 17 17:50:58.756803 kubelet[2721]: I0317 17:50:58.754663 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ad11c89-ad82-47fd-8d5f-250e61805e0c-lib-modules\") pod \"kube-proxy-8t2kr\" (UID: \"1ad11c89-ad82-47fd-8d5f-250e61805e0c\") " pod="kube-system/kube-proxy-8t2kr" Mar 17 17:50:58.756803 kubelet[2721]: I0317 17:50:58.754698 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ad11c89-ad82-47fd-8d5f-250e61805e0c-kube-proxy\") pod \"kube-proxy-8t2kr\" (UID: \"1ad11c89-ad82-47fd-8d5f-250e61805e0c\") " pod="kube-system/kube-proxy-8t2kr" Mar 17 17:50:58.756803 kubelet[2721]: I0317 17:50:58.754726 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56n5n\" (UniqueName: \"kubernetes.io/projected/1ad11c89-ad82-47fd-8d5f-250e61805e0c-kube-api-access-56n5n\") pod \"kube-proxy-8t2kr\" (UID: \"1ad11c89-ad82-47fd-8d5f-250e61805e0c\") " pod="kube-system/kube-proxy-8t2kr" Mar 17 17:50:58.766376 systemd[1]: Created slice kubepods-burstable-pod4f17b226_cf4f_474d_8289_5da60396aaab.slice - libcontainer container kubepods-burstable-pod4f17b226_cf4f_474d_8289_5da60396aaab.slice. 
Mar 17 17:50:58.855606 kubelet[2721]: I0317 17:50:58.855552 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f17b226-cf4f-474d-8289-5da60396aaab-cilium-config-path\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.855606 kubelet[2721]: I0317 17:50:58.855604 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-hostproc\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.855861 kubelet[2721]: I0317 17:50:58.855629 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-xtables-lock\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.855861 kubelet[2721]: I0317 17:50:58.855657 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-host-proc-sys-net\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.855861 kubelet[2721]: I0317 17:50:58.855687 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-cilium-run\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.855861 kubelet[2721]: I0317 17:50:58.855713 2721 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4ddk\" (UniqueName: \"kubernetes.io/projected/4f17b226-cf4f-474d-8289-5da60396aaab-kube-api-access-j4ddk\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.855861 kubelet[2721]: I0317 17:50:58.855804 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-lib-modules\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.855861 kubelet[2721]: I0317 17:50:58.855839 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-host-proc-sys-kernel\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.856156 kubelet[2721]: I0317 17:50:58.855864 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-bpf-maps\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.856156 kubelet[2721]: I0317 17:50:58.855892 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-etc-cni-netd\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.856156 kubelet[2721]: I0317 17:50:58.855941 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/4f17b226-cf4f-474d-8289-5da60396aaab-clustermesh-secrets\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.856156 kubelet[2721]: I0317 17:50:58.855970 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f17b226-cf4f-474d-8289-5da60396aaab-hubble-tls\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.856156 kubelet[2721]: I0317 17:50:58.856023 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-cilium-cgroup\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.856156 kubelet[2721]: I0317 17:50:58.856054 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-cni-path\") pod \"cilium-nvpg5\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") " pod="kube-system/cilium-nvpg5" Mar 17 17:50:58.862351 kubelet[2721]: E0317 17:50:58.862315 2721 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 17:50:58.862351 kubelet[2721]: E0317 17:50:58.862351 2721 projected.go:200] Error preparing data for projected volume kube-api-access-56n5n for pod kube-system/kube-proxy-8t2kr: configmap "kube-root-ca.crt" not found Mar 17 17:50:58.862605 kubelet[2721]: E0317 17:50:58.862430 2721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ad11c89-ad82-47fd-8d5f-250e61805e0c-kube-api-access-56n5n podName:1ad11c89-ad82-47fd-8d5f-250e61805e0c nodeName:}" failed. 
No retries permitted until 2025-03-17 17:50:59.362404705 +0000 UTC m=+13.426139533 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-56n5n" (UniqueName: "kubernetes.io/projected/1ad11c89-ad82-47fd-8d5f-250e61805e0c-kube-api-access-56n5n") pod "kube-proxy-8t2kr" (UID: "1ad11c89-ad82-47fd-8d5f-250e61805e0c") : configmap "kube-root-ca.crt" not found Mar 17 17:50:58.975519 kubelet[2721]: E0317 17:50:58.975371 2721 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 17:50:58.979484 kubelet[2721]: E0317 17:50:58.976857 2721 projected.go:200] Error preparing data for projected volume kube-api-access-j4ddk for pod kube-system/cilium-nvpg5: configmap "kube-root-ca.crt" not found Mar 17 17:50:58.979484 kubelet[2721]: E0317 17:50:58.976999 2721 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f17b226-cf4f-474d-8289-5da60396aaab-kube-api-access-j4ddk podName:4f17b226-cf4f-474d-8289-5da60396aaab nodeName:}" failed. No retries permitted until 2025-03-17 17:50:59.476952294 +0000 UTC m=+13.540687145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j4ddk" (UniqueName: "kubernetes.io/projected/4f17b226-cf4f-474d-8289-5da60396aaab-kube-api-access-j4ddk") pod "cilium-nvpg5" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab") : configmap "kube-root-ca.crt" not found Mar 17 17:50:59.357756 kubelet[2721]: I0317 17:50:59.357157 2721 topology_manager.go:215] "Topology Admit Handler" podUID="3e080c2d-91f0-4acf-ade8-628bce5b23f8" podNamespace="kube-system" podName="cilium-operator-599987898-x6dkc" Mar 17 17:50:59.372672 systemd[1]: Created slice kubepods-besteffort-pod3e080c2d_91f0_4acf_ade8_628bce5b23f8.slice - libcontainer container kubepods-besteffort-pod3e080c2d_91f0_4acf_ade8_628bce5b23f8.slice. 
Mar 17 17:50:59.461361 kubelet[2721]: I0317 17:50:59.461315 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48zf8\" (UniqueName: \"kubernetes.io/projected/3e080c2d-91f0-4acf-ade8-628bce5b23f8-kube-api-access-48zf8\") pod \"cilium-operator-599987898-x6dkc\" (UID: \"3e080c2d-91f0-4acf-ade8-628bce5b23f8\") " pod="kube-system/cilium-operator-599987898-x6dkc" Mar 17 17:50:59.461720 kubelet[2721]: I0317 17:50:59.461690 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e080c2d-91f0-4acf-ade8-628bce5b23f8-cilium-config-path\") pod \"cilium-operator-599987898-x6dkc\" (UID: \"3e080c2d-91f0-4acf-ade8-628bce5b23f8\") " pod="kube-system/cilium-operator-599987898-x6dkc" Mar 17 17:50:59.646228 containerd[1481]: time="2025-03-17T17:50:59.645965869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8t2kr,Uid:1ad11c89-ad82-47fd-8d5f-250e61805e0c,Namespace:kube-system,Attempt:0,}" Mar 17 17:50:59.673993 containerd[1481]: time="2025-03-17T17:50:59.673930823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nvpg5,Uid:4f17b226-cf4f-474d-8289-5da60396aaab,Namespace:kube-system,Attempt:0,}" Mar 17 17:50:59.679431 containerd[1481]: time="2025-03-17T17:50:59.678256903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:59.679575 containerd[1481]: time="2025-03-17T17:50:59.679436709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:59.679575 containerd[1481]: time="2025-03-17T17:50:59.679516841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:59.680887 containerd[1481]: time="2025-03-17T17:50:59.679708562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:59.682247 containerd[1481]: time="2025-03-17T17:50:59.682115565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-x6dkc,Uid:3e080c2d-91f0-4acf-ade8-628bce5b23f8,Namespace:kube-system,Attempt:0,}" Mar 17 17:50:59.710009 systemd[1]: Started cri-containerd-d95a0c276dea496ddbcb8fd12dfa632308964c2c7104fc4ca8a6ac75e24ed459.scope - libcontainer container d95a0c276dea496ddbcb8fd12dfa632308964c2c7104fc4ca8a6ac75e24ed459. Mar 17 17:50:59.741182 containerd[1481]: time="2025-03-17T17:50:59.741051422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:59.742820 containerd[1481]: time="2025-03-17T17:50:59.741860705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:59.742820 containerd[1481]: time="2025-03-17T17:50:59.741898012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:59.742820 containerd[1481]: time="2025-03-17T17:50:59.742052752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:59.744598 containerd[1481]: time="2025-03-17T17:50:59.743558410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:59.744598 containerd[1481]: time="2025-03-17T17:50:59.743620576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:59.744598 containerd[1481]: time="2025-03-17T17:50:59.743639699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:59.745376 containerd[1481]: time="2025-03-17T17:50:59.744482478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:59.776067 systemd[1]: Started cri-containerd-23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c.scope - libcontainer container 23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c. Mar 17 17:50:59.804156 systemd[1]: Started cri-containerd-1c9485b82c3226f46c4e162c97add5503adf02ea0019877252a2a2d3852edcc5.scope - libcontainer container 1c9485b82c3226f46c4e162c97add5503adf02ea0019877252a2a2d3852edcc5. Mar 17 17:50:59.810237 containerd[1481]: time="2025-03-17T17:50:59.810079235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8t2kr,Uid:1ad11c89-ad82-47fd-8d5f-250e61805e0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d95a0c276dea496ddbcb8fd12dfa632308964c2c7104fc4ca8a6ac75e24ed459\"" Mar 17 17:50:59.815440 containerd[1481]: time="2025-03-17T17:50:59.815382331Z" level=info msg="CreateContainer within sandbox \"d95a0c276dea496ddbcb8fd12dfa632308964c2c7104fc4ca8a6ac75e24ed459\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:50:59.841812 containerd[1481]: time="2025-03-17T17:50:59.841733437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nvpg5,Uid:4f17b226-cf4f-474d-8289-5da60396aaab,Namespace:kube-system,Attempt:0,} returns sandbox id \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\"" Mar 17 17:50:59.848131 containerd[1481]: time="2025-03-17T17:50:59.848096055Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:50:59.858941 containerd[1481]: time="2025-03-17T17:50:59.858888506Z" level=info msg="CreateContainer within sandbox \"d95a0c276dea496ddbcb8fd12dfa632308964c2c7104fc4ca8a6ac75e24ed459\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a02367730c8cec1063d39189fb6fe40881209d78f79a4e43b646ec729b57e739\"" Mar 17 17:50:59.860931 containerd[1481]: time="2025-03-17T17:50:59.860882052Z" level=info msg="StartContainer for \"a02367730c8cec1063d39189fb6fe40881209d78f79a4e43b646ec729b57e739\"" Mar 17 17:50:59.915422 systemd[1]: Started cri-containerd-a02367730c8cec1063d39189fb6fe40881209d78f79a4e43b646ec729b57e739.scope - libcontainer container a02367730c8cec1063d39189fb6fe40881209d78f79a4e43b646ec729b57e739. Mar 17 17:50:59.918424 containerd[1481]: time="2025-03-17T17:50:59.918377217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-x6dkc,Uid:3e080c2d-91f0-4acf-ade8-628bce5b23f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c9485b82c3226f46c4e162c97add5503adf02ea0019877252a2a2d3852edcc5\"" Mar 17 17:50:59.961267 containerd[1481]: time="2025-03-17T17:50:59.961106163Z" level=info msg="StartContainer for \"a02367730c8cec1063d39189fb6fe40881209d78f79a4e43b646ec729b57e739\" returns successfully" Mar 17 17:51:05.564173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2906233021.mount: Deactivated successfully. 
Mar 17 17:51:06.167387 kubelet[2721]: I0317 17:51:06.167268 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8t2kr" podStartSLOduration=8.167238148 podStartE2EDuration="8.167238148s" podCreationTimestamp="2025-03-17 17:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:51:00.224019507 +0000 UTC m=+14.287754358" watchObservedRunningTime="2025-03-17 17:51:06.167238148 +0000 UTC m=+20.230973019" Mar 17 17:51:08.264689 containerd[1481]: time="2025-03-17T17:51:08.264620113Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:08.266161 containerd[1481]: time="2025-03-17T17:51:08.266088166Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 17 17:51:08.267285 containerd[1481]: time="2025-03-17T17:51:08.267244260Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:08.270850 containerd[1481]: time="2025-03-17T17:51:08.270005850Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.421731119s" Mar 17 17:51:08.270850 containerd[1481]: time="2025-03-17T17:51:08.270048717Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 17:51:08.272848 containerd[1481]: time="2025-03-17T17:51:08.272523716Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:51:08.273993 containerd[1481]: time="2025-03-17T17:51:08.273959610Z" level=info msg="CreateContainer within sandbox \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:51:08.295065 containerd[1481]: time="2025-03-17T17:51:08.295014158Z" level=info msg="CreateContainer within sandbox \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f\"" Mar 17 17:51:08.296696 containerd[1481]: time="2025-03-17T17:51:08.295993410Z" level=info msg="StartContainer for \"92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f\"" Mar 17 17:51:08.342980 systemd[1]: Started cri-containerd-92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f.scope - libcontainer container 92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f. Mar 17 17:51:08.379018 containerd[1481]: time="2025-03-17T17:51:08.378939756Z" level=info msg="StartContainer for \"92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f\" returns successfully" Mar 17 17:51:08.399405 systemd[1]: cri-containerd-92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f.scope: Deactivated successfully. Mar 17 17:51:09.287977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f-rootfs.mount: Deactivated successfully. 
Mar 17 17:51:10.219351 containerd[1481]: time="2025-03-17T17:51:10.219162021Z" level=info msg="shim disconnected" id=92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f namespace=k8s.io Mar 17 17:51:10.219351 containerd[1481]: time="2025-03-17T17:51:10.219271512Z" level=warning msg="cleaning up after shim disconnected" id=92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f namespace=k8s.io Mar 17 17:51:10.219351 containerd[1481]: time="2025-03-17T17:51:10.219287503Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:51:10.805923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1594140371.mount: Deactivated successfully. Mar 17 17:51:11.239996 containerd[1481]: time="2025-03-17T17:51:11.239880343Z" level=info msg="CreateContainer within sandbox \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:51:11.266200 containerd[1481]: time="2025-03-17T17:51:11.266012932Z" level=info msg="CreateContainer within sandbox \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e\"" Mar 17 17:51:11.267008 containerd[1481]: time="2025-03-17T17:51:11.266966590Z" level=info msg="StartContainer for \"0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e\"" Mar 17 17:51:11.343188 systemd[1]: Started cri-containerd-0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e.scope - libcontainer container 0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e. Mar 17 17:51:11.403504 containerd[1481]: time="2025-03-17T17:51:11.403352255Z" level=info msg="StartContainer for \"0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e\" returns successfully" Mar 17 17:51:11.430416 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 17 17:51:11.430863 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:51:11.432607 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:51:11.439905 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:51:11.440439 systemd[1]: cri-containerd-0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e.scope: Deactivated successfully. Mar 17 17:51:11.479670 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:51:11.639348 containerd[1481]: time="2025-03-17T17:51:11.639249366Z" level=info msg="shim disconnected" id=0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e namespace=k8s.io Mar 17 17:51:11.639348 containerd[1481]: time="2025-03-17T17:51:11.639322574Z" level=warning msg="cleaning up after shim disconnected" id=0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e namespace=k8s.io Mar 17 17:51:11.639348 containerd[1481]: time="2025-03-17T17:51:11.639336006Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:51:11.729069 containerd[1481]: time="2025-03-17T17:51:11.728989932Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:11.730400 containerd[1481]: time="2025-03-17T17:51:11.730341657Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 17 17:51:11.732132 containerd[1481]: time="2025-03-17T17:51:11.732081002Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:51:11.734394 containerd[1481]: time="2025-03-17T17:51:11.733572278Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.460991896s" Mar 17 17:51:11.734394 containerd[1481]: time="2025-03-17T17:51:11.733618720Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 17:51:11.736759 containerd[1481]: time="2025-03-17T17:51:11.736721223Z" level=info msg="CreateContainer within sandbox \"1c9485b82c3226f46c4e162c97add5503adf02ea0019877252a2a2d3852edcc5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:51:11.753535 containerd[1481]: time="2025-03-17T17:51:11.753486116Z" level=info msg="CreateContainer within sandbox \"1c9485b82c3226f46c4e162c97add5503adf02ea0019877252a2a2d3852edcc5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\"" Mar 17 17:51:11.755807 containerd[1481]: time="2025-03-17T17:51:11.754000613Z" level=info msg="StartContainer for \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\"" Mar 17 17:51:11.794038 systemd[1]: Started cri-containerd-96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc.scope - libcontainer container 96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc. Mar 17 17:51:11.802019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e-rootfs.mount: Deactivated successfully. 
Mar 17 17:51:11.832595 containerd[1481]: time="2025-03-17T17:51:11.832422831Z" level=info msg="StartContainer for \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\" returns successfully" Mar 17 17:51:12.252423 containerd[1481]: time="2025-03-17T17:51:12.252305884Z" level=info msg="CreateContainer within sandbox \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:51:12.287652 kubelet[2721]: I0317 17:51:12.286881 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-x6dkc" podStartSLOduration=1.472127316 podStartE2EDuration="13.286854235s" podCreationTimestamp="2025-03-17 17:50:59 +0000 UTC" firstStartedPulling="2025-03-17 17:50:59.920115968 +0000 UTC m=+13.983850801" lastFinishedPulling="2025-03-17 17:51:11.734842872 +0000 UTC m=+25.798577720" observedRunningTime="2025-03-17 17:51:12.283248483 +0000 UTC m=+26.346983335" watchObservedRunningTime="2025-03-17 17:51:12.286854235 +0000 UTC m=+26.350589077" Mar 17 17:51:12.291924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3419879113.mount: Deactivated successfully. Mar 17 17:51:12.296125 containerd[1481]: time="2025-03-17T17:51:12.295405056Z" level=info msg="CreateContainer within sandbox \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174\"" Mar 17 17:51:12.297502 containerd[1481]: time="2025-03-17T17:51:12.297464082Z" level=info msg="StartContainer for \"0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174\"" Mar 17 17:51:12.378591 systemd[1]: Started cri-containerd-0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174.scope - libcontainer container 0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174. 
Mar 17 17:51:12.558333 containerd[1481]: time="2025-03-17T17:51:12.558124413Z" level=info msg="StartContainer for \"0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174\" returns successfully" Mar 17 17:51:12.564656 systemd[1]: cri-containerd-0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174.scope: Deactivated successfully. Mar 17 17:51:12.643318 containerd[1481]: time="2025-03-17T17:51:12.643014129Z" level=info msg="shim disconnected" id=0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174 namespace=k8s.io Mar 17 17:51:12.643318 containerd[1481]: time="2025-03-17T17:51:12.643263522Z" level=warning msg="cleaning up after shim disconnected" id=0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174 namespace=k8s.io Mar 17 17:51:12.643318 containerd[1481]: time="2025-03-17T17:51:12.643282622Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:51:12.801642 systemd[1]: run-containerd-runc-k8s.io-0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174-runc.iNKXLF.mount: Deactivated successfully. Mar 17 17:51:12.801912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174-rootfs.mount: Deactivated successfully. 
Mar 17 17:51:13.255108 containerd[1481]: time="2025-03-17T17:51:13.255049474Z" level=info msg="CreateContainer within sandbox \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:51:13.283585 containerd[1481]: time="2025-03-17T17:51:13.282602100Z" level=info msg="CreateContainer within sandbox \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc\"" Mar 17 17:51:13.285513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4158318272.mount: Deactivated successfully. Mar 17 17:51:13.286113 containerd[1481]: time="2025-03-17T17:51:13.285991248Z" level=info msg="StartContainer for \"bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc\"" Mar 17 17:51:13.342045 systemd[1]: Started cri-containerd-bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc.scope - libcontainer container bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc. Mar 17 17:51:13.388374 systemd[1]: cri-containerd-bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc.scope: Deactivated successfully. 
Mar 17 17:51:13.390646 containerd[1481]: time="2025-03-17T17:51:13.390601681Z" level=info msg="StartContainer for \"bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc\" returns successfully" Mar 17 17:51:13.421674 containerd[1481]: time="2025-03-17T17:51:13.421599836Z" level=info msg="shim disconnected" id=bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc namespace=k8s.io Mar 17 17:51:13.421674 containerd[1481]: time="2025-03-17T17:51:13.421670155Z" level=warning msg="cleaning up after shim disconnected" id=bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc namespace=k8s.io Mar 17 17:51:13.422073 containerd[1481]: time="2025-03-17T17:51:13.421683446Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:51:13.439353 containerd[1481]: time="2025-03-17T17:51:13.439291145Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:51:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:51:13.797345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc-rootfs.mount: Deactivated successfully. 
Mar 17 17:51:14.269688 containerd[1481]: time="2025-03-17T17:51:14.269633857Z" level=info msg="CreateContainer within sandbox \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:51:14.292309 containerd[1481]: time="2025-03-17T17:51:14.292124378Z" level=info msg="CreateContainer within sandbox \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\"" Mar 17 17:51:14.294024 containerd[1481]: time="2025-03-17T17:51:14.293629580Z" level=info msg="StartContainer for \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\"" Mar 17 17:51:14.338752 systemd[1]: run-containerd-runc-k8s.io-b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28-runc.zgCCGZ.mount: Deactivated successfully. Mar 17 17:51:14.347046 systemd[1]: Started cri-containerd-b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28.scope - libcontainer container b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28. 
Mar 17 17:51:14.388003 containerd[1481]: time="2025-03-17T17:51:14.387945195Z" level=info msg="StartContainer for \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\" returns successfully" Mar 17 17:51:14.498817 kubelet[2721]: I0317 17:51:14.497834 2721 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:51:14.531034 kubelet[2721]: I0317 17:51:14.530442 2721 topology_manager.go:215] "Topology Admit Handler" podUID="d094387d-fdb6-472c-9f23-6cbcc8e522f8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rwtxv" Mar 17 17:51:14.534067 kubelet[2721]: I0317 17:51:14.534028 2721 topology_manager.go:215] "Topology Admit Handler" podUID="6caefc63-a740-4b28-ab1c-b16c5b819455" podNamespace="kube-system" podName="coredns-7db6d8ff4d-swrdq" Mar 17 17:51:14.544029 systemd[1]: Created slice kubepods-burstable-podd094387d_fdb6_472c_9f23_6cbcc8e522f8.slice - libcontainer container kubepods-burstable-podd094387d_fdb6_472c_9f23_6cbcc8e522f8.slice. Mar 17 17:51:14.560266 systemd[1]: Created slice kubepods-burstable-pod6caefc63_a740_4b28_ab1c_b16c5b819455.slice - libcontainer container kubepods-burstable-pod6caefc63_a740_4b28_ab1c_b16c5b819455.slice. 
Mar 17 17:51:14.564757 kubelet[2721]: I0317 17:51:14.564706 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6caefc63-a740-4b28-ab1c-b16c5b819455-config-volume\") pod \"coredns-7db6d8ff4d-swrdq\" (UID: \"6caefc63-a740-4b28-ab1c-b16c5b819455\") " pod="kube-system/coredns-7db6d8ff4d-swrdq" Mar 17 17:51:14.564757 kubelet[2721]: I0317 17:51:14.564754 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d094387d-fdb6-472c-9f23-6cbcc8e522f8-config-volume\") pod \"coredns-7db6d8ff4d-rwtxv\" (UID: \"d094387d-fdb6-472c-9f23-6cbcc8e522f8\") " pod="kube-system/coredns-7db6d8ff4d-rwtxv" Mar 17 17:51:14.565079 kubelet[2721]: I0317 17:51:14.564804 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg59b\" (UniqueName: \"kubernetes.io/projected/d094387d-fdb6-472c-9f23-6cbcc8e522f8-kube-api-access-wg59b\") pod \"coredns-7db6d8ff4d-rwtxv\" (UID: \"d094387d-fdb6-472c-9f23-6cbcc8e522f8\") " pod="kube-system/coredns-7db6d8ff4d-rwtxv" Mar 17 17:51:14.565079 kubelet[2721]: I0317 17:51:14.564840 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6dbf\" (UniqueName: \"kubernetes.io/projected/6caefc63-a740-4b28-ab1c-b16c5b819455-kube-api-access-t6dbf\") pod \"coredns-7db6d8ff4d-swrdq\" (UID: \"6caefc63-a740-4b28-ab1c-b16c5b819455\") " pod="kube-system/coredns-7db6d8ff4d-swrdq" Mar 17 17:51:14.853849 containerd[1481]: time="2025-03-17T17:51:14.853201272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rwtxv,Uid:d094387d-fdb6-472c-9f23-6cbcc8e522f8,Namespace:kube-system,Attempt:0,}" Mar 17 17:51:14.873386 containerd[1481]: time="2025-03-17T17:51:14.871102591Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-swrdq,Uid:6caefc63-a740-4b28-ab1c-b16c5b819455,Namespace:kube-system,Attempt:0,}" Mar 17 17:51:15.293251 kubelet[2721]: I0317 17:51:15.292468 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nvpg5" podStartSLOduration=8.867576685 podStartE2EDuration="17.292442787s" podCreationTimestamp="2025-03-17 17:50:58 +0000 UTC" firstStartedPulling="2025-03-17 17:50:59.846633073 +0000 UTC m=+13.910367911" lastFinishedPulling="2025-03-17 17:51:08.271499185 +0000 UTC m=+22.335234013" observedRunningTime="2025-03-17 17:51:15.291947309 +0000 UTC m=+29.355682185" watchObservedRunningTime="2025-03-17 17:51:15.292442787 +0000 UTC m=+29.356177635" Mar 17 17:51:16.726568 systemd-networkd[1390]: cilium_host: Link UP Mar 17 17:51:16.726905 systemd-networkd[1390]: cilium_net: Link UP Mar 17 17:51:16.727211 systemd-networkd[1390]: cilium_net: Gained carrier Mar 17 17:51:16.727503 systemd-networkd[1390]: cilium_host: Gained carrier Mar 17 17:51:16.876817 systemd-networkd[1390]: cilium_vxlan: Link UP Mar 17 17:51:16.877027 systemd-networkd[1390]: cilium_vxlan: Gained carrier Mar 17 17:51:16.926121 systemd-networkd[1390]: cilium_host: Gained IPv6LL Mar 17 17:51:17.142039 systemd-networkd[1390]: cilium_net: Gained IPv6LL Mar 17 17:51:17.171132 kernel: NET: Registered PF_ALG protocol family Mar 17 17:51:18.056462 systemd-networkd[1390]: lxc_health: Link UP Mar 17 17:51:18.064144 systemd-networkd[1390]: lxc_health: Gained carrier Mar 17 17:51:18.442576 systemd-networkd[1390]: lxcb4e6a77ef4e1: Link UP Mar 17 17:51:18.453665 kernel: eth0: renamed from tmp978c3 Mar 17 17:51:18.462973 systemd-networkd[1390]: lxcb4e6a77ef4e1: Gained carrier Mar 17 17:51:18.489874 systemd-networkd[1390]: cilium_vxlan: Gained IPv6LL Mar 17 17:51:18.490221 systemd-networkd[1390]: lxc212cd7d2b7c8: Link UP Mar 17 17:51:18.501046 kernel: eth0: renamed from tmp923f4 Mar 17 17:51:18.509711 systemd-networkd[1390]: lxc212cd7d2b7c8: 
Gained carrier Mar 17 17:51:19.510505 systemd-networkd[1390]: lxc_health: Gained IPv6LL Mar 17 17:51:19.638239 systemd-networkd[1390]: lxcb4e6a77ef4e1: Gained IPv6LL Mar 17 17:51:20.278230 systemd-networkd[1390]: lxc212cd7d2b7c8: Gained IPv6LL Mar 17 17:51:22.808527 ntpd[1450]: Listen normally on 8 cilium_host 192.168.0.131:123 Mar 17 17:51:22.809607 ntpd[1450]: 17 Mar 17:51:22 ntpd[1450]: Listen normally on 8 cilium_host 192.168.0.131:123 Mar 17 17:51:22.809607 ntpd[1450]: 17 Mar 17:51:22 ntpd[1450]: Listen normally on 9 cilium_net [fe80::acc2:97ff:feea:d7f8%4]:123 Mar 17 17:51:22.809607 ntpd[1450]: 17 Mar 17:51:22 ntpd[1450]: Listen normally on 10 cilium_host [fe80::582b:70ff:fedf:873%5]:123 Mar 17 17:51:22.809607 ntpd[1450]: 17 Mar 17:51:22 ntpd[1450]: Listen normally on 11 cilium_vxlan [fe80::84ff:a5ff:feb8:612b%6]:123 Mar 17 17:51:22.809607 ntpd[1450]: 17 Mar 17:51:22 ntpd[1450]: Listen normally on 12 lxc_health [fe80::bc0b:7eff:febc:a50f%8]:123 Mar 17 17:51:22.809607 ntpd[1450]: 17 Mar 17:51:22 ntpd[1450]: Listen normally on 13 lxcb4e6a77ef4e1 [fe80::d8f1:d8ff:fe70:32a%10]:123 Mar 17 17:51:22.809607 ntpd[1450]: 17 Mar 17:51:22 ntpd[1450]: Listen normally on 14 lxc212cd7d2b7c8 [fe80::14c1:c0ff:fe57:4782%12]:123 Mar 17 17:51:22.808657 ntpd[1450]: Listen normally on 9 cilium_net [fe80::acc2:97ff:feea:d7f8%4]:123 Mar 17 17:51:22.808740 ntpd[1450]: Listen normally on 10 cilium_host [fe80::582b:70ff:fedf:873%5]:123 Mar 17 17:51:22.808831 ntpd[1450]: Listen normally on 11 cilium_vxlan [fe80::84ff:a5ff:feb8:612b%6]:123 Mar 17 17:51:22.808893 ntpd[1450]: Listen normally on 12 lxc_health [fe80::bc0b:7eff:febc:a50f%8]:123 Mar 17 17:51:22.808951 ntpd[1450]: Listen normally on 13 lxcb4e6a77ef4e1 [fe80::d8f1:d8ff:fe70:32a%10]:123 Mar 17 17:51:22.809009 ntpd[1450]: Listen normally on 14 lxc212cd7d2b7c8 [fe80::14c1:c0ff:fe57:4782%12]:123 Mar 17 17:51:23.624860 containerd[1481]: time="2025-03-17T17:51:23.624353229Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:51:23.624860 containerd[1481]: time="2025-03-17T17:51:23.624558313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:51:23.624860 containerd[1481]: time="2025-03-17T17:51:23.624641934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:23.628169 containerd[1481]: time="2025-03-17T17:51:23.626861327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:23.680106 containerd[1481]: time="2025-03-17T17:51:23.678088551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:51:23.680106 containerd[1481]: time="2025-03-17T17:51:23.678168307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:51:23.680106 containerd[1481]: time="2025-03-17T17:51:23.678190250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:23.680106 containerd[1481]: time="2025-03-17T17:51:23.678511619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:51:23.708868 systemd[1]: Started cri-containerd-923f48346853029e62aa2fd9b90455b5175afdf3d44b9e30c30318914a1efeda.scope - libcontainer container 923f48346853029e62aa2fd9b90455b5175afdf3d44b9e30c30318914a1efeda. Mar 17 17:51:23.732064 systemd[1]: Started cri-containerd-978c3642457df9604e8c3e125ff1b41c68889a69286610b714c47bf69f39359e.scope - libcontainer container 978c3642457df9604e8c3e125ff1b41c68889a69286610b714c47bf69f39359e. 
Mar 17 17:51:23.836625 containerd[1481]: time="2025-03-17T17:51:23.836491654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rwtxv,Uid:d094387d-fdb6-472c-9f23-6cbcc8e522f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"978c3642457df9604e8c3e125ff1b41c68889a69286610b714c47bf69f39359e\"" Mar 17 17:51:23.855222 containerd[1481]: time="2025-03-17T17:51:23.854817123Z" level=info msg="CreateContainer within sandbox \"978c3642457df9604e8c3e125ff1b41c68889a69286610b714c47bf69f39359e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:51:23.881705 containerd[1481]: time="2025-03-17T17:51:23.881580952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-swrdq,Uid:6caefc63-a740-4b28-ab1c-b16c5b819455,Namespace:kube-system,Attempt:0,} returns sandbox id \"923f48346853029e62aa2fd9b90455b5175afdf3d44b9e30c30318914a1efeda\"" Mar 17 17:51:23.888698 containerd[1481]: time="2025-03-17T17:51:23.888642372Z" level=info msg="CreateContainer within sandbox \"923f48346853029e62aa2fd9b90455b5175afdf3d44b9e30c30318914a1efeda\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:51:23.891419 containerd[1481]: time="2025-03-17T17:51:23.891316588Z" level=info msg="CreateContainer within sandbox \"978c3642457df9604e8c3e125ff1b41c68889a69286610b714c47bf69f39359e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cfa0ffebdf1a750d6eeaafd716f12d68131b6c0aa2ee4bbdbf5b89febaa0a9d1\"" Mar 17 17:51:23.893253 containerd[1481]: time="2025-03-17T17:51:23.892417649Z" level=info msg="StartContainer for \"cfa0ffebdf1a750d6eeaafd716f12d68131b6c0aa2ee4bbdbf5b89febaa0a9d1\"" Mar 17 17:51:23.916390 containerd[1481]: time="2025-03-17T17:51:23.916226795Z" level=info msg="CreateContainer within sandbox \"923f48346853029e62aa2fd9b90455b5175afdf3d44b9e30c30318914a1efeda\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"a99eea035306c0601ef8185f42d2c5f5e0b3be94547abf3e5db7342df5e5c4db\"" Mar 17 17:51:23.918435 containerd[1481]: time="2025-03-17T17:51:23.918388339Z" level=info msg="StartContainer for \"a99eea035306c0601ef8185f42d2c5f5e0b3be94547abf3e5db7342df5e5c4db\"" Mar 17 17:51:23.975022 systemd[1]: Started cri-containerd-cfa0ffebdf1a750d6eeaafd716f12d68131b6c0aa2ee4bbdbf5b89febaa0a9d1.scope - libcontainer container cfa0ffebdf1a750d6eeaafd716f12d68131b6c0aa2ee4bbdbf5b89febaa0a9d1. Mar 17 17:51:24.000331 systemd[1]: Started cri-containerd-a99eea035306c0601ef8185f42d2c5f5e0b3be94547abf3e5db7342df5e5c4db.scope - libcontainer container a99eea035306c0601ef8185f42d2c5f5e0b3be94547abf3e5db7342df5e5c4db. Mar 17 17:51:24.113381 containerd[1481]: time="2025-03-17T17:51:24.112016765Z" level=info msg="StartContainer for \"a99eea035306c0601ef8185f42d2c5f5e0b3be94547abf3e5db7342df5e5c4db\" returns successfully" Mar 17 17:51:24.118024 containerd[1481]: time="2025-03-17T17:51:24.117977405Z" level=info msg="StartContainer for \"cfa0ffebdf1a750d6eeaafd716f12d68131b6c0aa2ee4bbdbf5b89febaa0a9d1\" returns successfully" Mar 17 17:51:24.318091 kubelet[2721]: I0317 17:51:24.318017 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-swrdq" podStartSLOduration=25.317988271 podStartE2EDuration="25.317988271s" podCreationTimestamp="2025-03-17 17:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:51:24.317545629 +0000 UTC m=+38.381280484" watchObservedRunningTime="2025-03-17 17:51:24.317988271 +0000 UTC m=+38.381723115" Mar 17 17:51:24.337443 kubelet[2721]: I0317 17:51:24.336807 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rwtxv" podStartSLOduration=25.336758924 podStartE2EDuration="25.336758924s" podCreationTimestamp="2025-03-17 17:50:59 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:51:24.336645932 +0000 UTC m=+38.400380782" watchObservedRunningTime="2025-03-17 17:51:24.336758924 +0000 UTC m=+38.400493798" Mar 17 17:51:24.641479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229697838.mount: Deactivated successfully. Mar 17 17:51:49.688260 systemd[1]: Started sshd@9-10.128.0.84:22-139.178.89.65:55536.service - OpenSSH per-connection server daemon (139.178.89.65:55536). Mar 17 17:51:49.990953 sshd[4112]: Accepted publickey for core from 139.178.89.65 port 55536 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8 Mar 17 17:51:49.992758 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:49.999877 systemd-logind[1468]: New session 10 of user core. Mar 17 17:51:50.008019 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:51:50.308415 sshd[4114]: Connection closed by 139.178.89.65 port 55536 Mar 17 17:51:50.309698 sshd-session[4112]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:50.314977 systemd[1]: sshd@9-10.128.0.84:22-139.178.89.65:55536.service: Deactivated successfully. Mar 17 17:51:50.318144 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:51:50.319637 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:51:50.321290 systemd-logind[1468]: Removed session 10. Mar 17 17:51:55.368214 systemd[1]: Started sshd@10-10.128.0.84:22-139.178.89.65:47134.service - OpenSSH per-connection server daemon (139.178.89.65:47134). Mar 17 17:51:55.667225 sshd[4132]: Accepted publickey for core from 139.178.89.65 port 47134 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8 Mar 17 17:51:55.669052 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:51:55.675345 systemd-logind[1468]: New session 11 of user core. 
Mar 17 17:51:55.683008 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:51:55.959730 sshd[4134]: Connection closed by 139.178.89.65 port 47134 Mar 17 17:51:55.960663 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Mar 17 17:51:55.967142 systemd[1]: sshd@10-10.128.0.84:22-139.178.89.65:47134.service: Deactivated successfully. Mar 17 17:51:55.970153 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:51:55.971263 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:51:55.973153 systemd-logind[1468]: Removed session 11. Mar 17 17:52:01.019240 systemd[1]: Started sshd@11-10.128.0.84:22-139.178.89.65:47138.service - OpenSSH per-connection server daemon (139.178.89.65:47138). Mar 17 17:52:01.319521 sshd[4149]: Accepted publickey for core from 139.178.89.65 port 47138 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8 Mar 17 17:52:01.321569 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:52:01.327827 systemd-logind[1468]: New session 12 of user core. Mar 17 17:52:01.332072 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:52:01.613033 sshd[4151]: Connection closed by 139.178.89.65 port 47138 Mar 17 17:52:01.614376 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Mar 17 17:52:01.619609 systemd[1]: sshd@11-10.128.0.84:22-139.178.89.65:47138.service: Deactivated successfully. Mar 17 17:52:01.622683 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:52:01.624070 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:52:01.625743 systemd-logind[1468]: Removed session 12. Mar 17 17:52:06.672187 systemd[1]: Started sshd@12-10.128.0.84:22-139.178.89.65:43840.service - OpenSSH per-connection server daemon (139.178.89.65:43840). 
Mar 17 17:52:06.967702 sshd[4164]: Accepted publickey for core from 139.178.89.65 port 43840 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8 Mar 17 17:52:06.969603 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:52:06.975581 systemd-logind[1468]: New session 13 of user core. Mar 17 17:52:06.980061 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:52:07.266431 sshd[4166]: Connection closed by 139.178.89.65 port 43840 Mar 17 17:52:07.267668 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Mar 17 17:52:07.273469 systemd[1]: sshd@12-10.128.0.84:22-139.178.89.65:43840.service: Deactivated successfully. Mar 17 17:52:07.276937 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:52:07.278366 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:52:07.279955 systemd-logind[1468]: Removed session 13. Mar 17 17:52:09.184997 update_engine[1470]: I20250317 17:52:09.184921 1470 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 17 17:52:09.184997 update_engine[1470]: I20250317 17:52:09.184986 1470 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 17 17:52:09.185592 update_engine[1470]: I20250317 17:52:09.185259 1470 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 17 17:52:09.186032 update_engine[1470]: I20250317 17:52:09.185978 1470 omaha_request_params.cc:62] Current group set to beta Mar 17 17:52:09.186345 update_engine[1470]: I20250317 17:52:09.186149 1470 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 17 17:52:09.186345 update_engine[1470]: I20250317 17:52:09.186182 1470 update_attempter.cc:643] Scheduling an action processor start. 
Mar 17 17:52:09.186345 update_engine[1470]: I20250317 17:52:09.186208 1470 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 17:52:09.186345 update_engine[1470]: I20250317 17:52:09.186262 1470 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 17 17:52:09.186557 update_engine[1470]: I20250317 17:52:09.186358 1470 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 17 17:52:09.186557 update_engine[1470]: I20250317 17:52:09.186374 1470 omaha_request_action.cc:272] Request:
Mar 17 17:52:09.186557 update_engine[1470]:
Mar 17 17:52:09.186557 update_engine[1470]:
Mar 17 17:52:09.186557 update_engine[1470]:
Mar 17 17:52:09.186557 update_engine[1470]:
Mar 17 17:52:09.186557 update_engine[1470]:
Mar 17 17:52:09.186557 update_engine[1470]:
Mar 17 17:52:09.186557 update_engine[1470]:
Mar 17 17:52:09.186557 update_engine[1470]:
Mar 17 17:52:09.186557 update_engine[1470]: I20250317 17:52:09.186388 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 17:52:09.187376 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 17 17:52:09.188341 update_engine[1470]: I20250317 17:52:09.188285 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 17:52:09.188830 update_engine[1470]: I20250317 17:52:09.188735 1470 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 17:52:09.225726 update_engine[1470]: E20250317 17:52:09.225644 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 17:52:09.225939 update_engine[1470]: I20250317 17:52:09.225808 1470 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 17 17:52:12.324530 systemd[1]: Started sshd@13-10.128.0.84:22-139.178.89.65:34156.service - OpenSSH per-connection server daemon (139.178.89.65:34156).
Mar 17 17:52:12.630308 sshd[4181]: Accepted publickey for core from 139.178.89.65 port 34156 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:12.632252 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:12.638054 systemd-logind[1468]: New session 14 of user core.
Mar 17 17:52:12.643980 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 17:52:12.919616 sshd[4183]: Connection closed by 139.178.89.65 port 34156
Mar 17 17:52:12.920533 sshd-session[4181]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:12.926090 systemd[1]: sshd@13-10.128.0.84:22-139.178.89.65:34156.service: Deactivated successfully.
Mar 17 17:52:12.929635 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 17:52:12.930976 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit.
Mar 17 17:52:12.932555 systemd-logind[1468]: Removed session 14.
Mar 17 17:52:12.981312 systemd[1]: Started sshd@14-10.128.0.84:22-139.178.89.65:34162.service - OpenSSH per-connection server daemon (139.178.89.65:34162).
Mar 17 17:52:13.293503 sshd[4196]: Accepted publickey for core from 139.178.89.65 port 34162 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:13.295480 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:13.302758 systemd-logind[1468]: New session 15 of user core.
Mar 17 17:52:13.311016 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 17:52:13.627086 sshd[4198]: Connection closed by 139.178.89.65 port 34162
Mar 17 17:52:13.628128 sshd-session[4196]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:13.634144 systemd[1]: sshd@14-10.128.0.84:22-139.178.89.65:34162.service: Deactivated successfully.
Mar 17 17:52:13.637119 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 17:52:13.638410 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit.
Mar 17 17:52:13.640034 systemd-logind[1468]: Removed session 15.
Mar 17 17:52:13.684172 systemd[1]: Started sshd@15-10.128.0.84:22-139.178.89.65:34172.service - OpenSSH per-connection server daemon (139.178.89.65:34172).
Mar 17 17:52:13.977835 sshd[4208]: Accepted publickey for core from 139.178.89.65 port 34172 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:13.979607 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:13.986891 systemd-logind[1468]: New session 16 of user core.
Mar 17 17:52:13.992000 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 17:52:14.270976 sshd[4211]: Connection closed by 139.178.89.65 port 34172
Mar 17 17:52:14.272239 sshd-session[4208]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:14.276685 systemd[1]: sshd@15-10.128.0.84:22-139.178.89.65:34172.service: Deactivated successfully.
Mar 17 17:52:14.280034 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 17:52:14.282628 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit.
Mar 17 17:52:14.284609 systemd-logind[1468]: Removed session 16.
Mar 17 17:52:19.189454 update_engine[1470]: I20250317 17:52:19.189350 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 17:52:19.190011 update_engine[1470]: I20250317 17:52:19.189723 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 17:52:19.190163 update_engine[1470]: I20250317 17:52:19.190109 1470 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 17:52:19.202056 update_engine[1470]: E20250317 17:52:19.201973 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 17:52:19.202056 update_engine[1470]: I20250317 17:52:19.202082 1470 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 17 17:52:19.330260 systemd[1]: Started sshd@16-10.128.0.84:22-139.178.89.65:34174.service - OpenSSH per-connection server daemon (139.178.89.65:34174).
Mar 17 17:52:19.625964 sshd[4223]: Accepted publickey for core from 139.178.89.65 port 34174 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:19.628137 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:19.634882 systemd-logind[1468]: New session 17 of user core.
Mar 17 17:52:19.641000 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 17:52:19.920277 sshd[4225]: Connection closed by 139.178.89.65 port 34174
Mar 17 17:52:19.921112 sshd-session[4223]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:19.925941 systemd[1]: sshd@16-10.128.0.84:22-139.178.89.65:34174.service: Deactivated successfully.
Mar 17 17:52:19.929082 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 17:52:19.931458 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit.
Mar 17 17:52:19.933011 systemd-logind[1468]: Removed session 17.
Mar 17 17:52:24.982218 systemd[1]: Started sshd@17-10.128.0.84:22-139.178.89.65:44260.service - OpenSSH per-connection server daemon (139.178.89.65:44260).
Mar 17 17:52:25.274958 sshd[4237]: Accepted publickey for core from 139.178.89.65 port 44260 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:25.276597 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:25.282713 systemd-logind[1468]: New session 18 of user core.
Mar 17 17:52:25.288004 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 17:52:25.564738 sshd[4239]: Connection closed by 139.178.89.65 port 44260
Mar 17 17:52:25.566031 sshd-session[4237]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:25.571139 systemd[1]: sshd@17-10.128.0.84:22-139.178.89.65:44260.service: Deactivated successfully.
Mar 17 17:52:25.574013 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 17:52:25.575387 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit.
Mar 17 17:52:25.576878 systemd-logind[1468]: Removed session 18.
Mar 17 17:52:25.628234 systemd[1]: Started sshd@18-10.128.0.84:22-139.178.89.65:44266.service - OpenSSH per-connection server daemon (139.178.89.65:44266).
Mar 17 17:52:25.917688 sshd[4250]: Accepted publickey for core from 139.178.89.65 port 44266 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:25.919550 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:25.925459 systemd-logind[1468]: New session 19 of user core.
Mar 17 17:52:25.930977 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 17:52:26.289188 sshd[4252]: Connection closed by 139.178.89.65 port 44266
Mar 17 17:52:26.290245 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:26.295918 systemd[1]: sshd@18-10.128.0.84:22-139.178.89.65:44266.service: Deactivated successfully.
Mar 17 17:52:26.299117 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 17:52:26.300362 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit.
Mar 17 17:52:26.302022 systemd-logind[1468]: Removed session 19.
Mar 17 17:52:26.349258 systemd[1]: Started sshd@19-10.128.0.84:22-139.178.89.65:44270.service - OpenSSH per-connection server daemon (139.178.89.65:44270).
Mar 17 17:52:26.647128 sshd[4262]: Accepted publickey for core from 139.178.89.65 port 44270 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:26.649056 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:26.655650 systemd-logind[1468]: New session 20 of user core.
Mar 17 17:52:26.662031 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 17:52:28.480423 sshd[4264]: Connection closed by 139.178.89.65 port 44270
Mar 17 17:52:28.483241 sshd-session[4262]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:28.488390 systemd[1]: sshd@19-10.128.0.84:22-139.178.89.65:44270.service: Deactivated successfully.
Mar 17 17:52:28.492369 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 17:52:28.495182 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit.
Mar 17 17:52:28.497721 systemd-logind[1468]: Removed session 20.
Mar 17 17:52:28.538207 systemd[1]: Started sshd@20-10.128.0.84:22-139.178.89.65:44276.service - OpenSSH per-connection server daemon (139.178.89.65:44276).
Mar 17 17:52:28.832428 sshd[4281]: Accepted publickey for core from 139.178.89.65 port 44276 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:28.833943 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:28.841025 systemd-logind[1468]: New session 21 of user core.
Mar 17 17:52:28.847197 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 17:52:29.184767 update_engine[1470]: I20250317 17:52:29.184670 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 17:52:29.185354 update_engine[1470]: I20250317 17:52:29.185087 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 17:52:29.185474 update_engine[1470]: I20250317 17:52:29.185422 1470 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 17:52:29.197717 update_engine[1470]: E20250317 17:52:29.197549 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 17:52:29.197717 update_engine[1470]: I20250317 17:52:29.197670 1470 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 17 17:52:29.275818 sshd[4283]: Connection closed by 139.178.89.65 port 44276
Mar 17 17:52:29.277072 sshd-session[4281]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:29.282860 systemd[1]: sshd@20-10.128.0.84:22-139.178.89.65:44276.service: Deactivated successfully.
Mar 17 17:52:29.286327 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 17:52:29.287548 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit.
Mar 17 17:52:29.289184 systemd-logind[1468]: Removed session 21.
Mar 17 17:52:29.334210 systemd[1]: Started sshd@21-10.128.0.84:22-139.178.89.65:44292.service - OpenSSH per-connection server daemon (139.178.89.65:44292).
Mar 17 17:52:29.636009 sshd[4293]: Accepted publickey for core from 139.178.89.65 port 44292 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:29.637949 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:29.644152 systemd-logind[1468]: New session 22 of user core.
Mar 17 17:52:29.649978 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 17 17:52:29.926284 sshd[4295]: Connection closed by 139.178.89.65 port 44292
Mar 17 17:52:29.927204 sshd-session[4293]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:29.932661 systemd[1]: sshd@21-10.128.0.84:22-139.178.89.65:44292.service: Deactivated successfully.
Mar 17 17:52:29.935755 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 17:52:29.937178 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit.
Mar 17 17:52:29.938913 systemd-logind[1468]: Removed session 22.
Mar 17 17:52:34.984207 systemd[1]: Started sshd@22-10.128.0.84:22-139.178.89.65:50660.service - OpenSSH per-connection server daemon (139.178.89.65:50660).
Mar 17 17:52:35.282598 sshd[4311]: Accepted publickey for core from 139.178.89.65 port 50660 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:35.284742 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:35.292437 systemd-logind[1468]: New session 23 of user core.
Mar 17 17:52:35.298007 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 17 17:52:35.572072 sshd[4313]: Connection closed by 139.178.89.65 port 50660
Mar 17 17:52:35.573860 sshd-session[4311]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:35.579240 systemd[1]: sshd@22-10.128.0.84:22-139.178.89.65:50660.service: Deactivated successfully.
Mar 17 17:52:35.582505 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 17:52:35.583596 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit.
Mar 17 17:52:35.585310 systemd-logind[1468]: Removed session 23.
Mar 17 17:52:39.186947 update_engine[1470]: I20250317 17:52:39.186846 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 17:52:39.187532 update_engine[1470]: I20250317 17:52:39.187217 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 17:52:39.187618 update_engine[1470]: I20250317 17:52:39.187578 1470 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 17:52:39.280969 update_engine[1470]: E20250317 17:52:39.280879 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 17:52:39.281151 update_engine[1470]: I20250317 17:52:39.281010 1470 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 17 17:52:39.281151 update_engine[1470]: I20250317 17:52:39.281030 1470 omaha_request_action.cc:617] Omaha request response:
Mar 17 17:52:39.281151 update_engine[1470]: E20250317 17:52:39.281142 1470 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 17 17:52:39.281339 update_engine[1470]: I20250317 17:52:39.281181 1470 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 17 17:52:39.281339 update_engine[1470]: I20250317 17:52:39.281194 1470 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 17:52:39.281339 update_engine[1470]: I20250317 17:52:39.281205 1470 update_attempter.cc:306] Processing Done.
Mar 17 17:52:39.281339 update_engine[1470]: E20250317 17:52:39.281229 1470 update_attempter.cc:619] Update failed.
Mar 17 17:52:39.281339 update_engine[1470]: I20250317 17:52:39.281240 1470 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 17 17:52:39.281339 update_engine[1470]: I20250317 17:52:39.281251 1470 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 17 17:52:39.281339 update_engine[1470]: I20250317 17:52:39.281262 1470 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 17 17:52:39.281656 update_engine[1470]: I20250317 17:52:39.281375 1470 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 17:52:39.281656 update_engine[1470]: I20250317 17:52:39.281414 1470 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 17 17:52:39.281656 update_engine[1470]: I20250317 17:52:39.281425 1470 omaha_request_action.cc:272] Request:
Mar 17 17:52:39.281656 update_engine[1470]:
Mar 17 17:52:39.281656 update_engine[1470]:
Mar 17 17:52:39.281656 update_engine[1470]:
Mar 17 17:52:39.281656 update_engine[1470]:
Mar 17 17:52:39.281656 update_engine[1470]:
Mar 17 17:52:39.281656 update_engine[1470]:
Mar 17 17:52:39.281656 update_engine[1470]: I20250317 17:52:39.281437 1470 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 17:52:39.282410 update_engine[1470]: I20250317 17:52:39.281697 1470 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 17:52:39.282410 update_engine[1470]: I20250317 17:52:39.282299 1470 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 17:52:39.282492 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 17 17:52:39.293854 update_engine[1470]: E20250317 17:52:39.293769 1470 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 17:52:39.294006 update_engine[1470]: I20250317 17:52:39.293911 1470 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 17 17:52:39.294006 update_engine[1470]: I20250317 17:52:39.293933 1470 omaha_request_action.cc:617] Omaha request response:
Mar 17 17:52:39.294006 update_engine[1470]: I20250317 17:52:39.293947 1470 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 17:52:39.294006 update_engine[1470]: I20250317 17:52:39.293959 1470 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 17:52:39.294006 update_engine[1470]: I20250317 17:52:39.293970 1470 update_attempter.cc:306] Processing Done.
Mar 17 17:52:39.294006 update_engine[1470]: I20250317 17:52:39.293982 1470 update_attempter.cc:310] Error event sent.
Mar 17 17:52:39.294006 update_engine[1470]: I20250317 17:52:39.293998 1470 update_check_scheduler.cc:74] Next update check in 46m35s
Mar 17 17:52:39.294474 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 17 17:52:40.632219 systemd[1]: Started sshd@23-10.128.0.84:22-139.178.89.65:50662.service - OpenSSH per-connection server daemon (139.178.89.65:50662).
Mar 17 17:52:40.929637 sshd[4325]: Accepted publickey for core from 139.178.89.65 port 50662 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:40.931458 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:40.938430 systemd-logind[1468]: New session 24 of user core.
Mar 17 17:52:40.945994 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 17 17:52:41.222758 sshd[4327]: Connection closed by 139.178.89.65 port 50662
Mar 17 17:52:41.224057 sshd-session[4325]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:41.228860 systemd[1]: sshd@23-10.128.0.84:22-139.178.89.65:50662.service: Deactivated successfully.
Mar 17 17:52:41.231582 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 17:52:41.233633 systemd-logind[1468]: Session 24 logged out. Waiting for processes to exit.
Mar 17 17:52:41.235442 systemd-logind[1468]: Removed session 24.
Mar 17 17:52:46.288305 systemd[1]: Started sshd@24-10.128.0.84:22-139.178.89.65:54142.service - OpenSSH per-connection server daemon (139.178.89.65:54142).
Mar 17 17:52:46.584313 sshd[4341]: Accepted publickey for core from 139.178.89.65 port 54142 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:46.588302 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:46.602962 systemd-logind[1468]: New session 25 of user core.
Mar 17 17:52:46.608025 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 17 17:52:46.868876 sshd[4343]: Connection closed by 139.178.89.65 port 54142
Mar 17 17:52:46.869670 sshd-session[4341]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:46.875251 systemd[1]: sshd@24-10.128.0.84:22-139.178.89.65:54142.service: Deactivated successfully.
Mar 17 17:52:46.878388 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 17:52:46.879704 systemd-logind[1468]: Session 25 logged out. Waiting for processes to exit.
Mar 17 17:52:46.881572 systemd-logind[1468]: Removed session 25.
Mar 17 17:52:46.932225 systemd[1]: Started sshd@25-10.128.0.84:22-139.178.89.65:54146.service - OpenSSH per-connection server daemon (139.178.89.65:54146).
Mar 17 17:52:47.232537 sshd[4355]: Accepted publickey for core from 139.178.89.65 port 54146 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:47.234414 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:47.241747 systemd-logind[1468]: New session 26 of user core.
Mar 17 17:52:47.246023 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 17 17:52:49.355215 containerd[1481]: time="2025-03-17T17:52:49.355122490Z" level=info msg="StopContainer for \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\" with timeout 30 (s)"
Mar 17 17:52:49.356244 containerd[1481]: time="2025-03-17T17:52:49.356036174Z" level=info msg="Stop container \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\" with signal terminated"
Mar 17 17:52:49.381429 systemd[1]: cri-containerd-96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc.scope: Deactivated successfully.
Mar 17 17:52:49.389621 containerd[1481]: time="2025-03-17T17:52:49.389566206Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:52:49.404020 containerd[1481]: time="2025-03-17T17:52:49.403870321Z" level=info msg="StopContainer for \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\" with timeout 2 (s)"
Mar 17 17:52:49.409808 containerd[1481]: time="2025-03-17T17:52:49.407986644Z" level=info msg="Stop container \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\" with signal terminated"
Mar 17 17:52:49.430914 systemd-networkd[1390]: lxc_health: Link DOWN
Mar 17 17:52:49.431607 systemd-networkd[1390]: lxc_health: Lost carrier
Mar 17 17:52:49.448736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc-rootfs.mount: Deactivated successfully.
Mar 17 17:52:49.453238 systemd[1]: cri-containerd-b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28.scope: Deactivated successfully.
Mar 17 17:52:49.453887 systemd[1]: cri-containerd-b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28.scope: Consumed 9.534s CPU time, 127.6M memory peak, 144K read from disk, 13.3M written to disk.
Mar 17 17:52:49.471279 containerd[1481]: time="2025-03-17T17:52:49.470521277Z" level=info msg="shim disconnected" id=96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc namespace=k8s.io
Mar 17 17:52:49.471279 containerd[1481]: time="2025-03-17T17:52:49.470620241Z" level=warning msg="cleaning up after shim disconnected" id=96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc namespace=k8s.io
Mar 17 17:52:49.471279 containerd[1481]: time="2025-03-17T17:52:49.470635328Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:52:49.504697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28-rootfs.mount: Deactivated successfully.
Mar 17 17:52:49.509374 containerd[1481]: time="2025-03-17T17:52:49.509322665Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:52:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 17:52:49.510864 containerd[1481]: time="2025-03-17T17:52:49.510760369Z" level=info msg="shim disconnected" id=b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28 namespace=k8s.io
Mar 17 17:52:49.510970 containerd[1481]: time="2025-03-17T17:52:49.510865111Z" level=warning msg="cleaning up after shim disconnected" id=b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28 namespace=k8s.io
Mar 17 17:52:49.510970 containerd[1481]: time="2025-03-17T17:52:49.510880806Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:52:49.514580 containerd[1481]: time="2025-03-17T17:52:49.514423202Z" level=info msg="StopContainer for \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\" returns successfully"
Mar 17 17:52:49.515580 containerd[1481]: time="2025-03-17T17:52:49.515412353Z" level=info msg="StopPodSandbox for \"1c9485b82c3226f46c4e162c97add5503adf02ea0019877252a2a2d3852edcc5\""
Mar 17 17:52:49.515580 containerd[1481]: time="2025-03-17T17:52:49.515473346Z" level=info msg="Container to stop \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:52:49.524191 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c9485b82c3226f46c4e162c97add5503adf02ea0019877252a2a2d3852edcc5-shm.mount: Deactivated successfully.
Mar 17 17:52:49.535950 systemd[1]: cri-containerd-1c9485b82c3226f46c4e162c97add5503adf02ea0019877252a2a2d3852edcc5.scope: Deactivated successfully.
Mar 17 17:52:49.545359 containerd[1481]: time="2025-03-17T17:52:49.545287974Z" level=info msg="StopContainer for \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\" returns successfully"
Mar 17 17:52:49.547471 containerd[1481]: time="2025-03-17T17:52:49.547349028Z" level=info msg="StopPodSandbox for \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\""
Mar 17 17:52:49.547471 containerd[1481]: time="2025-03-17T17:52:49.547401169Z" level=info msg="Container to stop \"0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:52:49.547471 containerd[1481]: time="2025-03-17T17:52:49.547454342Z" level=info msg="Container to stop \"bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:52:49.547471 containerd[1481]: time="2025-03-17T17:52:49.547470566Z" level=info msg="Container to stop \"92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:52:49.550191 containerd[1481]: time="2025-03-17T17:52:49.547485782Z" level=info msg="Container to stop \"0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:52:49.550191 containerd[1481]: time="2025-03-17T17:52:49.547499577Z" level=info msg="Container to stop \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:52:49.553578 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c-shm.mount: Deactivated successfully.
Mar 17 17:52:49.564167 systemd[1]: cri-containerd-23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c.scope: Deactivated successfully.
Mar 17 17:52:49.592035 containerd[1481]: time="2025-03-17T17:52:49.591873690Z" level=info msg="shim disconnected" id=1c9485b82c3226f46c4e162c97add5503adf02ea0019877252a2a2d3852edcc5 namespace=k8s.io
Mar 17 17:52:49.592035 containerd[1481]: time="2025-03-17T17:52:49.591969336Z" level=warning msg="cleaning up after shim disconnected" id=1c9485b82c3226f46c4e162c97add5503adf02ea0019877252a2a2d3852edcc5 namespace=k8s.io
Mar 17 17:52:49.592035 containerd[1481]: time="2025-03-17T17:52:49.591986652Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:52:49.614216 containerd[1481]: time="2025-03-17T17:52:49.613902247Z" level=info msg="shim disconnected" id=23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c namespace=k8s.io
Mar 17 17:52:49.614216 containerd[1481]: time="2025-03-17T17:52:49.613991837Z" level=warning msg="cleaning up after shim disconnected" id=23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c namespace=k8s.io
Mar 17 17:52:49.614216 containerd[1481]: time="2025-03-17T17:52:49.614007171Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:52:49.618807 containerd[1481]: time="2025-03-17T17:52:49.617741294Z" level=info msg="TearDown network for sandbox \"1c9485b82c3226f46c4e162c97add5503adf02ea0019877252a2a2d3852edcc5\" successfully"
Mar 17 17:52:49.618807 containerd[1481]: time="2025-03-17T17:52:49.617793251Z" level=info msg="StopPodSandbox for \"1c9485b82c3226f46c4e162c97add5503adf02ea0019877252a2a2d3852edcc5\" returns successfully"
Mar 17 17:52:49.640875 containerd[1481]: time="2025-03-17T17:52:49.640814489Z" level=info msg="TearDown network for sandbox \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\" successfully"
Mar 17 17:52:49.640875 containerd[1481]: time="2025-03-17T17:52:49.640869155Z" level=info msg="StopPodSandbox for \"23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c\" returns successfully"
Mar 17 17:52:49.712301 kubelet[2721]: I0317 17:52:49.712240 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-cilium-cgroup\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.712301 kubelet[2721]: I0317 17:52:49.712307 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f17b226-cf4f-474d-8289-5da60396aaab-clustermesh-secrets\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.713001 kubelet[2721]: I0317 17:52:49.712336 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-lib-modules\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.713001 kubelet[2721]: I0317 17:52:49.712365 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-host-proc-sys-kernel\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.713001 kubelet[2721]: I0317 17:52:49.712386 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-host-proc-sys-net\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.713001 kubelet[2721]: I0317 17:52:49.712412 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-cilium-run\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.713001 kubelet[2721]: I0317 17:52:49.712438 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-hostproc\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.713001 kubelet[2721]: I0317 17:52:49.712477 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48zf8\" (UniqueName: \"kubernetes.io/projected/3e080c2d-91f0-4acf-ade8-628bce5b23f8-kube-api-access-48zf8\") pod \"3e080c2d-91f0-4acf-ade8-628bce5b23f8\" (UID: \"3e080c2d-91f0-4acf-ade8-628bce5b23f8\") "
Mar 17 17:52:49.713302 kubelet[2721]: I0317 17:52:49.712511 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e080c2d-91f0-4acf-ade8-628bce5b23f8-cilium-config-path\") pod \"3e080c2d-91f0-4acf-ade8-628bce5b23f8\" (UID: \"3e080c2d-91f0-4acf-ade8-628bce5b23f8\") "
Mar 17 17:52:49.713302 kubelet[2721]: I0317 17:52:49.712536 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-cni-path\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.713302 kubelet[2721]: I0317 17:52:49.712563 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f17b226-cf4f-474d-8289-5da60396aaab-hubble-tls\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.713302 kubelet[2721]: I0317 17:52:49.712588 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-xtables-lock\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.713302 kubelet[2721]: I0317 17:52:49.712617 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f17b226-cf4f-474d-8289-5da60396aaab-cilium-config-path\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.713302 kubelet[2721]: I0317 17:52:49.712644 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-bpf-maps\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.713588 kubelet[2721]: I0317 17:52:49.712674 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4ddk\" (UniqueName: \"kubernetes.io/projected/4f17b226-cf4f-474d-8289-5da60396aaab-kube-api-access-j4ddk\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.713588 kubelet[2721]: I0317 17:52:49.712698 2721 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-etc-cni-netd\") pod \"4f17b226-cf4f-474d-8289-5da60396aaab\" (UID: \"4f17b226-cf4f-474d-8289-5da60396aaab\") "
Mar 17 17:52:49.713588 kubelet[2721]: I0317 17:52:49.712844 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "etc-cni-netd".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:52:49.713588 kubelet[2721]: I0317 17:52:49.712917 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:52:49.717271 kubelet[2721]: I0317 17:52:49.715971 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-cni-path" (OuterVolumeSpecName: "cni-path") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:52:49.717271 kubelet[2721]: I0317 17:52:49.716872 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:52:49.717271 kubelet[2721]: I0317 17:52:49.716940 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:52:49.717271 kubelet[2721]: I0317 17:52:49.716971 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:52:49.717271 kubelet[2721]: I0317 17:52:49.716996 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:52:49.717608 kubelet[2721]: I0317 17:52:49.717020 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-hostproc" (OuterVolumeSpecName: "hostproc") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:52:49.718203 kubelet[2721]: I0317 17:52:49.717903 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:52:49.719355 kubelet[2721]: I0317 17:52:49.719232 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:52:49.720588 kubelet[2721]: I0317 17:52:49.720550 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e080c2d-91f0-4acf-ade8-628bce5b23f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e080c2d-91f0-4acf-ade8-628bce5b23f8" (UID: "3e080c2d-91f0-4acf-ade8-628bce5b23f8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:52:49.722129 kubelet[2721]: I0317 17:52:49.722094 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f17b226-cf4f-474d-8289-5da60396aaab-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:52:49.724448 kubelet[2721]: I0317 17:52:49.724368 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f17b226-cf4f-474d-8289-5da60396aaab-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:52:49.727009 kubelet[2721]: I0317 17:52:49.726976 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e080c2d-91f0-4acf-ade8-628bce5b23f8-kube-api-access-48zf8" (OuterVolumeSpecName: "kube-api-access-48zf8") pod "3e080c2d-91f0-4acf-ade8-628bce5b23f8" (UID: "3e080c2d-91f0-4acf-ade8-628bce5b23f8"). InnerVolumeSpecName "kube-api-access-48zf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:52:49.727190 kubelet[2721]: I0317 17:52:49.727075 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f17b226-cf4f-474d-8289-5da60396aaab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:52:49.728524 kubelet[2721]: I0317 17:52:49.728468 2721 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f17b226-cf4f-474d-8289-5da60396aaab-kube-api-access-j4ddk" (OuterVolumeSpecName: "kube-api-access-j4ddk") pod "4f17b226-cf4f-474d-8289-5da60396aaab" (UID: "4f17b226-cf4f-474d-8289-5da60396aaab"). InnerVolumeSpecName "kube-api-access-j4ddk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:52:49.814008 kubelet[2721]: I0317 17:52:49.813944 2721 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-hostproc\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814008 kubelet[2721]: I0317 17:52:49.814000 2721 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-48zf8\" (UniqueName: \"kubernetes.io/projected/3e080c2d-91f0-4acf-ade8-628bce5b23f8-kube-api-access-48zf8\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814292 kubelet[2721]: I0317 17:52:49.814023 2721 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e080c2d-91f0-4acf-ade8-628bce5b23f8-cilium-config-path\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814292 kubelet[2721]: I0317 17:52:49.814040 2721 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-cni-path\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814292 kubelet[2721]: I0317 17:52:49.814054 2721 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f17b226-cf4f-474d-8289-5da60396aaab-hubble-tls\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814292 kubelet[2721]: I0317 17:52:49.814070 2721 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-xtables-lock\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814292 kubelet[2721]: 
I0317 17:52:49.814103 2721 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f17b226-cf4f-474d-8289-5da60396aaab-cilium-config-path\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814292 kubelet[2721]: I0317 17:52:49.814119 2721 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-bpf-maps\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814292 kubelet[2721]: I0317 17:52:49.814133 2721 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-j4ddk\" (UniqueName: \"kubernetes.io/projected/4f17b226-cf4f-474d-8289-5da60396aaab-kube-api-access-j4ddk\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814513 kubelet[2721]: I0317 17:52:49.814148 2721 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-etc-cni-netd\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814513 kubelet[2721]: I0317 17:52:49.814162 2721 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-cilium-cgroup\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814513 kubelet[2721]: I0317 17:52:49.814177 2721 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-host-proc-sys-kernel\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814513 kubelet[2721]: I0317 17:52:49.814196 2721 reconciler_common.go:289] "Volume 
detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-host-proc-sys-net\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814513 kubelet[2721]: I0317 17:52:49.814213 2721 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f17b226-cf4f-474d-8289-5da60396aaab-clustermesh-secrets\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814513 kubelet[2721]: I0317 17:52:49.814231 2721 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-lib-modules\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:49.814513 kubelet[2721]: I0317 17:52:49.814247 2721 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f17b226-cf4f-474d-8289-5da60396aaab-cilium-run\") on node \"ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal\" DevicePath \"\"" Mar 17 17:52:50.152231 systemd[1]: Removed slice kubepods-besteffort-pod3e080c2d_91f0_4acf_ade8_628bce5b23f8.slice - libcontainer container kubepods-besteffort-pod3e080c2d_91f0_4acf_ade8_628bce5b23f8.slice. Mar 17 17:52:50.154496 systemd[1]: Removed slice kubepods-burstable-pod4f17b226_cf4f_474d_8289_5da60396aaab.slice - libcontainer container kubepods-burstable-pod4f17b226_cf4f_474d_8289_5da60396aaab.slice. Mar 17 17:52:50.154695 systemd[1]: kubepods-burstable-pod4f17b226_cf4f_474d_8289_5da60396aaab.slice: Consumed 9.669s CPU time, 128.1M memory peak, 144K read from disk, 13.3M written to disk. Mar 17 17:52:50.370250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c9485b82c3226f46c4e162c97add5503adf02ea0019877252a2a2d3852edcc5-rootfs.mount: Deactivated successfully. 
Mar 17 17:52:50.370408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23ae3e5beb2c38972b75d2a6e5635521a373aa009c4075fedd671d6f6140075c-rootfs.mount: Deactivated successfully. Mar 17 17:52:50.370515 systemd[1]: var-lib-kubelet-pods-3e080c2d\x2d91f0\x2d4acf\x2dade8\x2d628bce5b23f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d48zf8.mount: Deactivated successfully. Mar 17 17:52:50.370630 systemd[1]: var-lib-kubelet-pods-4f17b226\x2dcf4f\x2d474d\x2d8289\x2d5da60396aaab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj4ddk.mount: Deactivated successfully. Mar 17 17:52:50.370750 systemd[1]: var-lib-kubelet-pods-4f17b226\x2dcf4f\x2d474d\x2d8289\x2d5da60396aaab-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 17:52:50.370872 systemd[1]: var-lib-kubelet-pods-4f17b226\x2dcf4f\x2d474d\x2d8289\x2d5da60396aaab-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:52:50.512266 kubelet[2721]: I0317 17:52:50.509716 2721 scope.go:117] "RemoveContainer" containerID="96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc" Mar 17 17:52:50.512393 containerd[1481]: time="2025-03-17T17:52:50.511902549Z" level=info msg="RemoveContainer for \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\"" Mar 17 17:52:50.520953 containerd[1481]: time="2025-03-17T17:52:50.520223801Z" level=info msg="RemoveContainer for \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\" returns successfully" Mar 17 17:52:50.521118 kubelet[2721]: I0317 17:52:50.520548 2721 scope.go:117] "RemoveContainer" containerID="96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc" Mar 17 17:52:50.521686 containerd[1481]: time="2025-03-17T17:52:50.521627176Z" level=error msg="ContainerStatus for \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\": not found" Mar 17 17:52:50.523013 kubelet[2721]: E0317 17:52:50.522165 2721 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\": not found" containerID="96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc" Mar 17 17:52:50.523013 kubelet[2721]: I0317 17:52:50.522219 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc"} err="failed to get container status \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"96a4974c5ed64bd8b808f9eb6bd2f0473f8c4c7ecc278ab1ca61930b9e43d9fc\": not found" Mar 17 17:52:50.523013 kubelet[2721]: I0317 17:52:50.522339 2721 scope.go:117] "RemoveContainer" containerID="b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28" Mar 17 17:52:50.524725 containerd[1481]: time="2025-03-17T17:52:50.524540337Z" level=info msg="RemoveContainer for \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\"" Mar 17 17:52:50.530310 containerd[1481]: time="2025-03-17T17:52:50.530264853Z" level=info msg="RemoveContainer for \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\" returns successfully" Mar 17 17:52:50.530502 kubelet[2721]: I0317 17:52:50.530474 2721 scope.go:117] "RemoveContainer" containerID="bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc" Mar 17 17:52:50.532142 containerd[1481]: time="2025-03-17T17:52:50.532078653Z" level=info msg="RemoveContainer for \"bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc\"" Mar 17 17:52:50.537463 containerd[1481]: time="2025-03-17T17:52:50.537416698Z" level=info 
msg="RemoveContainer for \"bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc\" returns successfully" Mar 17 17:52:50.538572 kubelet[2721]: I0317 17:52:50.537844 2721 scope.go:117] "RemoveContainer" containerID="0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174" Mar 17 17:52:50.539411 containerd[1481]: time="2025-03-17T17:52:50.539380357Z" level=info msg="RemoveContainer for \"0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174\"" Mar 17 17:52:50.543601 containerd[1481]: time="2025-03-17T17:52:50.543560574Z" level=info msg="RemoveContainer for \"0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174\" returns successfully" Mar 17 17:52:50.543825 kubelet[2721]: I0317 17:52:50.543757 2721 scope.go:117] "RemoveContainer" containerID="0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e" Mar 17 17:52:50.546509 containerd[1481]: time="2025-03-17T17:52:50.546431693Z" level=info msg="RemoveContainer for \"0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e\"" Mar 17 17:52:50.551488 containerd[1481]: time="2025-03-17T17:52:50.551426792Z" level=info msg="RemoveContainer for \"0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e\" returns successfully" Mar 17 17:52:50.552315 kubelet[2721]: I0317 17:52:50.552290 2721 scope.go:117] "RemoveContainer" containerID="92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f" Mar 17 17:52:50.556112 containerd[1481]: time="2025-03-17T17:52:50.556082767Z" level=info msg="RemoveContainer for \"92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f\"" Mar 17 17:52:50.561682 containerd[1481]: time="2025-03-17T17:52:50.560634947Z" level=info msg="RemoveContainer for \"92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f\" returns successfully" Mar 17 17:52:50.562751 kubelet[2721]: I0317 17:52:50.562712 2721 scope.go:117] "RemoveContainer" containerID="b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28" Mar 17 
17:52:50.563020 containerd[1481]: time="2025-03-17T17:52:50.562975943Z" level=error msg="ContainerStatus for \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\": not found" Mar 17 17:52:50.563203 kubelet[2721]: E0317 17:52:50.563173 2721 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\": not found" containerID="b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28" Mar 17 17:52:50.563282 kubelet[2721]: I0317 17:52:50.563225 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28"} err="failed to get container status \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5a5fae43c9e94613f3a442791d056537af7368f2f4777ce3db8ef5e52ff2f28\": not found" Mar 17 17:52:50.563282 kubelet[2721]: I0317 17:52:50.563259 2721 scope.go:117] "RemoveContainer" containerID="bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc" Mar 17 17:52:50.563639 containerd[1481]: time="2025-03-17T17:52:50.563601110Z" level=error msg="ContainerStatus for \"bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc\": not found" Mar 17 17:52:50.564033 kubelet[2721]: E0317 17:52:50.564004 2721 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc\": not found" containerID="bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc" Mar 17 17:52:50.564114 kubelet[2721]: I0317 17:52:50.564040 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc"} err="failed to get container status \"bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcc078cfd8caab238b82a580dd0cc25be6d2ac9517abcca139d02d0da7f543bc\": not found" Mar 17 17:52:50.564114 kubelet[2721]: I0317 17:52:50.564075 2721 scope.go:117] "RemoveContainer" containerID="0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174" Mar 17 17:52:50.564461 containerd[1481]: time="2025-03-17T17:52:50.564275631Z" level=error msg="ContainerStatus for \"0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174\": not found" Mar 17 17:52:50.564743 kubelet[2721]: E0317 17:52:50.564449 2721 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174\": not found" containerID="0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174" Mar 17 17:52:50.564743 kubelet[2721]: I0317 17:52:50.564479 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174"} err="failed to get container status \"0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"0842da6f1d69d2fc51b19fcaec90b4a6edbcd45dbca11fb0ba26b38c31b4a174\": not found" Mar 17 17:52:50.564743 kubelet[2721]: I0317 17:52:50.564507 2721 scope.go:117] "RemoveContainer" containerID="0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e" Mar 17 17:52:50.566095 containerd[1481]: time="2025-03-17T17:52:50.565575018Z" level=error msg="ContainerStatus for \"0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e\": not found" Mar 17 17:52:50.566179 kubelet[2721]: E0317 17:52:50.565745 2721 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e\": not found" containerID="0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e" Mar 17 17:52:50.566179 kubelet[2721]: I0317 17:52:50.565801 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e"} err="failed to get container status \"0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b17ad015859cc0d9958028caa495683ef97ec98e524b8b259aaf4c867812f4e\": not found" Mar 17 17:52:50.566179 kubelet[2721]: I0317 17:52:50.565837 2721 scope.go:117] "RemoveContainer" containerID="92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f" Mar 17 17:52:50.566516 containerd[1481]: time="2025-03-17T17:52:50.566464223Z" level=error msg="ContainerStatus for \"92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f\": not found" Mar 17 17:52:50.566849 kubelet[2721]: E0317 17:52:50.566717 2721 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f\": not found" containerID="92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f" Mar 17 17:52:50.566849 kubelet[2721]: I0317 17:52:50.566804 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f"} err="failed to get container status \"92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f\": rpc error: code = NotFound desc = an error occurred when try to find container \"92999ce6cc43c44774be25a9953059043abaaaf98d48c3f1eb067858ba69372f\": not found" Mar 17 17:52:51.292613 kubelet[2721]: E0317 17:52:51.292555 2721 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:52:51.336316 sshd[4357]: Connection closed by 139.178.89.65 port 54146 Mar 17 17:52:51.337287 sshd-session[4355]: pam_unix(sshd:session): session closed for user core Mar 17 17:52:51.343160 systemd[1]: sshd@25-10.128.0.84:22-139.178.89.65:54146.service: Deactivated successfully. Mar 17 17:52:51.346575 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 17:52:51.347122 systemd[1]: session-26.scope: Consumed 1.347s CPU time, 27.2M memory peak. Mar 17 17:52:51.347947 systemd-logind[1468]: Session 26 logged out. Waiting for processes to exit. Mar 17 17:52:51.349930 systemd-logind[1468]: Removed session 26. Mar 17 17:52:51.393229 systemd[1]: Started sshd@26-10.128.0.84:22-139.178.89.65:39540.service - OpenSSH per-connection server daemon (139.178.89.65:39540). 
Mar 17 17:52:51.687652 sshd[4518]: Accepted publickey for core from 139.178.89.65 port 39540 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:51.689977 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:51.700800 systemd-logind[1468]: New session 27 of user core.
Mar 17 17:52:51.705991 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 17 17:52:51.808321 ntpd[1450]: Deleting interface #12 lxc_health, fe80::bc0b:7eff:febc:a50f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=89 secs
Mar 17 17:52:51.808886 ntpd[1450]: 17 Mar 17:52:51 ntpd[1450]: Deleting interface #12 lxc_health, fe80::bc0b:7eff:febc:a50f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=89 secs
Mar 17 17:52:52.144815 kubelet[2721]: E0317 17:52:52.144064 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-swrdq" podUID="6caefc63-a740-4b28-ab1c-b16c5b819455"
Mar 17 17:52:52.150709 kubelet[2721]: I0317 17:52:52.149509 2721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e080c2d-91f0-4acf-ade8-628bce5b23f8" path="/var/lib/kubelet/pods/3e080c2d-91f0-4acf-ade8-628bce5b23f8/volumes"
Mar 17 17:52:52.152464 kubelet[2721]: I0317 17:52:52.151678 2721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f17b226-cf4f-474d-8289-5da60396aaab" path="/var/lib/kubelet/pods/4f17b226-cf4f-474d-8289-5da60396aaab/volumes"
Mar 17 17:52:52.545562 sshd[4520]: Connection closed by 139.178.89.65 port 39540
Mar 17 17:52:52.546705 sshd-session[4518]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:52.552703 kubelet[2721]: I0317 17:52:52.551768 2721 topology_manager.go:215] "Topology Admit Handler" podUID="05a38964-d883-418f-9fda-d743e0b9c956" podNamespace="kube-system" podName="cilium-r7sng"
Mar 17 17:52:52.552703 kubelet[2721]: E0317 17:52:52.551876 2721 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e080c2d-91f0-4acf-ade8-628bce5b23f8" containerName="cilium-operator"
Mar 17 17:52:52.552703 kubelet[2721]: E0317 17:52:52.551892 2721 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f17b226-cf4f-474d-8289-5da60396aaab" containerName="mount-bpf-fs"
Mar 17 17:52:52.552703 kubelet[2721]: E0317 17:52:52.551905 2721 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f17b226-cf4f-474d-8289-5da60396aaab" containerName="clean-cilium-state"
Mar 17 17:52:52.552703 kubelet[2721]: E0317 17:52:52.551916 2721 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f17b226-cf4f-474d-8289-5da60396aaab" containerName="cilium-agent"
Mar 17 17:52:52.552703 kubelet[2721]: E0317 17:52:52.551927 2721 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f17b226-cf4f-474d-8289-5da60396aaab" containerName="mount-cgroup"
Mar 17 17:52:52.552703 kubelet[2721]: E0317 17:52:52.551937 2721 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f17b226-cf4f-474d-8289-5da60396aaab" containerName="apply-sysctl-overwrites"
Mar 17 17:52:52.552703 kubelet[2721]: I0317 17:52:52.551969 2721 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e080c2d-91f0-4acf-ade8-628bce5b23f8" containerName="cilium-operator"
Mar 17 17:52:52.552703 kubelet[2721]: I0317 17:52:52.551980 2721 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f17b226-cf4f-474d-8289-5da60396aaab" containerName="cilium-agent"
Mar 17 17:52:52.558001 systemd[1]: sshd@26-10.128.0.84:22-139.178.89.65:39540.service: Deactivated successfully.
Mar 17 17:52:52.564966 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 17:52:52.566932 systemd-logind[1468]: Session 27 logged out. Waiting for processes to exit.
Mar 17 17:52:52.576336 systemd-logind[1468]: Removed session 27.
Mar 17 17:52:52.584293 systemd[1]: Created slice kubepods-burstable-pod05a38964_d883_418f_9fda_d743e0b9c956.slice - libcontainer container kubepods-burstable-pod05a38964_d883_418f_9fda_d743e0b9c956.slice.
Mar 17 17:52:52.613242 systemd[1]: Started sshd@27-10.128.0.84:22-139.178.89.65:39550.service - OpenSSH per-connection server daemon (139.178.89.65:39550).
Mar 17 17:52:52.634010 kubelet[2721]: I0317 17:52:52.633529 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05a38964-d883-418f-9fda-d743e0b9c956-host-proc-sys-net\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.634010 kubelet[2721]: I0317 17:52:52.633584 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slzbg\" (UniqueName: \"kubernetes.io/projected/05a38964-d883-418f-9fda-d743e0b9c956-kube-api-access-slzbg\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.634010 kubelet[2721]: I0317 17:52:52.633618 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05a38964-d883-418f-9fda-d743e0b9c956-cilium-cgroup\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.634010 kubelet[2721]: I0317 17:52:52.633652 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05a38964-d883-418f-9fda-d743e0b9c956-clustermesh-secrets\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.634010 kubelet[2721]: I0317 17:52:52.633680 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05a38964-d883-418f-9fda-d743e0b9c956-hubble-tls\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.634010 kubelet[2721]: I0317 17:52:52.633707 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05a38964-d883-418f-9fda-d743e0b9c956-bpf-maps\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.634497 kubelet[2721]: I0317 17:52:52.633746 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05a38964-d883-418f-9fda-d743e0b9c956-hostproc\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.634497 kubelet[2721]: I0317 17:52:52.633771 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05a38964-d883-418f-9fda-d743e0b9c956-xtables-lock\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.634497 kubelet[2721]: I0317 17:52:52.633815 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05a38964-d883-418f-9fda-d743e0b9c956-host-proc-sys-kernel\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.634497 kubelet[2721]: I0317 17:52:52.633841 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05a38964-d883-418f-9fda-d743e0b9c956-lib-modules\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.634497 kubelet[2721]: I0317 17:52:52.633866 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05a38964-d883-418f-9fda-d743e0b9c956-cilium-run\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.634497 kubelet[2721]: I0317 17:52:52.633893 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05a38964-d883-418f-9fda-d743e0b9c956-cni-path\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.636811 kubelet[2721]: I0317 17:52:52.633919 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05a38964-d883-418f-9fda-d743e0b9c956-etc-cni-netd\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.636811 kubelet[2721]: I0317 17:52:52.633951 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05a38964-d883-418f-9fda-d743e0b9c956-cilium-config-path\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.636811 kubelet[2721]: I0317 17:52:52.633978 2721 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05a38964-d883-418f-9fda-d743e0b9c956-cilium-ipsec-secrets\") pod \"cilium-r7sng\" (UID: \"05a38964-d883-418f-9fda-d743e0b9c956\") " pod="kube-system/cilium-r7sng"
Mar 17 17:52:52.905914 containerd[1481]: time="2025-03-17T17:52:52.905847222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r7sng,Uid:05a38964-d883-418f-9fda-d743e0b9c956,Namespace:kube-system,Attempt:0,}"
Mar 17 17:52:52.940768 containerd[1481]: time="2025-03-17T17:52:52.940202149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:52:52.940768 containerd[1481]: time="2025-03-17T17:52:52.940362845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:52:52.940768 containerd[1481]: time="2025-03-17T17:52:52.940415913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:52:52.940768 containerd[1481]: time="2025-03-17T17:52:52.940617468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:52:52.947585 sshd[4532]: Accepted publickey for core from 139.178.89.65 port 39550 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:52.955592 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:52.967498 systemd-logind[1468]: New session 28 of user core.
Mar 17 17:52:52.972197 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 17 17:52:52.985260 systemd[1]: Started cri-containerd-358704d39c5d0c98c9a38845ef26870a1af5e58ce436ad8e9bed348360b8c2c2.scope - libcontainer container 358704d39c5d0c98c9a38845ef26870a1af5e58ce436ad8e9bed348360b8c2c2.
Mar 17 17:52:53.021733 containerd[1481]: time="2025-03-17T17:52:53.021324533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r7sng,Uid:05a38964-d883-418f-9fda-d743e0b9c956,Namespace:kube-system,Attempt:0,} returns sandbox id \"358704d39c5d0c98c9a38845ef26870a1af5e58ce436ad8e9bed348360b8c2c2\""
Mar 17 17:52:53.025340 containerd[1481]: time="2025-03-17T17:52:53.025055965Z" level=info msg="CreateContainer within sandbox \"358704d39c5d0c98c9a38845ef26870a1af5e58ce436ad8e9bed348360b8c2c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:52:53.040684 containerd[1481]: time="2025-03-17T17:52:53.040615559Z" level=info msg="CreateContainer within sandbox \"358704d39c5d0c98c9a38845ef26870a1af5e58ce436ad8e9bed348360b8c2c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ea83f7bee7a79a40bece3b7591dec9c0ce38ba487c0e780069f9f34c2f46121\""
Mar 17 17:52:53.041384 containerd[1481]: time="2025-03-17T17:52:53.041345067Z" level=info msg="StartContainer for \"6ea83f7bee7a79a40bece3b7591dec9c0ce38ba487c0e780069f9f34c2f46121\""
Mar 17 17:52:53.079042 systemd[1]: Started cri-containerd-6ea83f7bee7a79a40bece3b7591dec9c0ce38ba487c0e780069f9f34c2f46121.scope - libcontainer container 6ea83f7bee7a79a40bece3b7591dec9c0ce38ba487c0e780069f9f34c2f46121.
Mar 17 17:52:53.114763 containerd[1481]: time="2025-03-17T17:52:53.114700345Z" level=info msg="StartContainer for \"6ea83f7bee7a79a40bece3b7591dec9c0ce38ba487c0e780069f9f34c2f46121\" returns successfully"
Mar 17 17:52:53.129507 systemd[1]: cri-containerd-6ea83f7bee7a79a40bece3b7591dec9c0ce38ba487c0e780069f9f34c2f46121.scope: Deactivated successfully.
Mar 17 17:52:53.162769 sshd[4566]: Connection closed by 139.178.89.65 port 39550
Mar 17 17:52:53.163187 sshd-session[4532]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:53.171517 systemd[1]: sshd@27-10.128.0.84:22-139.178.89.65:39550.service: Deactivated successfully.
Mar 17 17:52:53.171887 containerd[1481]: time="2025-03-17T17:52:53.171599384Z" level=info msg="shim disconnected" id=6ea83f7bee7a79a40bece3b7591dec9c0ce38ba487c0e780069f9f34c2f46121 namespace=k8s.io
Mar 17 17:52:53.171887 containerd[1481]: time="2025-03-17T17:52:53.171667260Z" level=warning msg="cleaning up after shim disconnected" id=6ea83f7bee7a79a40bece3b7591dec9c0ce38ba487c0e780069f9f34c2f46121 namespace=k8s.io
Mar 17 17:52:53.171887 containerd[1481]: time="2025-03-17T17:52:53.171681139Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:52:53.176342 systemd[1]: session-28.scope: Deactivated successfully.
Mar 17 17:52:53.178019 systemd-logind[1468]: Session 28 logged out. Waiting for processes to exit.
Mar 17 17:52:53.180112 systemd-logind[1468]: Removed session 28.
Mar 17 17:52:53.222718 systemd[1]: Started sshd@28-10.128.0.84:22-139.178.89.65:39560.service - OpenSSH per-connection server daemon (139.178.89.65:39560).
Mar 17 17:52:53.515925 sshd[4649]: Accepted publickey for core from 139.178.89.65 port 39560 ssh2: RSA SHA256:ICepeu+KKCGYPSmsgU9YCx8y7BtMZUq087Glx8y15n8
Mar 17 17:52:53.517554 sshd-session[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:52:53.524801 systemd-logind[1468]: New session 29 of user core.
Mar 17 17:52:53.528996 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 17 17:52:53.539875 containerd[1481]: time="2025-03-17T17:52:53.537957694Z" level=info msg="CreateContainer within sandbox \"358704d39c5d0c98c9a38845ef26870a1af5e58ce436ad8e9bed348360b8c2c2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:52:53.559484 containerd[1481]: time="2025-03-17T17:52:53.559415403Z" level=info msg="CreateContainer within sandbox \"358704d39c5d0c98c9a38845ef26870a1af5e58ce436ad8e9bed348360b8c2c2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c7b4f199fa998fa6e4ecd1269363a183731ec13c0f6ad67ee93ccfb7db47a2c8\""
Mar 17 17:52:53.562407 containerd[1481]: time="2025-03-17T17:52:53.562189903Z" level=info msg="StartContainer for \"c7b4f199fa998fa6e4ecd1269363a183731ec13c0f6ad67ee93ccfb7db47a2c8\""
Mar 17 17:52:53.607070 systemd[1]: Started cri-containerd-c7b4f199fa998fa6e4ecd1269363a183731ec13c0f6ad67ee93ccfb7db47a2c8.scope - libcontainer container c7b4f199fa998fa6e4ecd1269363a183731ec13c0f6ad67ee93ccfb7db47a2c8.
Mar 17 17:52:53.649750 containerd[1481]: time="2025-03-17T17:52:53.649690613Z" level=info msg="StartContainer for \"c7b4f199fa998fa6e4ecd1269363a183731ec13c0f6ad67ee93ccfb7db47a2c8\" returns successfully"
Mar 17 17:52:53.658491 systemd[1]: cri-containerd-c7b4f199fa998fa6e4ecd1269363a183731ec13c0f6ad67ee93ccfb7db47a2c8.scope: Deactivated successfully.
Mar 17 17:52:53.705011 containerd[1481]: time="2025-03-17T17:52:53.704637788Z" level=info msg="shim disconnected" id=c7b4f199fa998fa6e4ecd1269363a183731ec13c0f6ad67ee93ccfb7db47a2c8 namespace=k8s.io
Mar 17 17:52:53.705011 containerd[1481]: time="2025-03-17T17:52:53.704734377Z" level=warning msg="cleaning up after shim disconnected" id=c7b4f199fa998fa6e4ecd1269363a183731ec13c0f6ad67ee93ccfb7db47a2c8 namespace=k8s.io
Mar 17 17:52:53.705011 containerd[1481]: time="2025-03-17T17:52:53.704750433Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:52:54.144301 kubelet[2721]: E0317 17:52:54.142890 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-swrdq" podUID="6caefc63-a740-4b28-ab1c-b16c5b819455"
Mar 17 17:52:54.544191 containerd[1481]: time="2025-03-17T17:52:54.544040630Z" level=info msg="CreateContainer within sandbox \"358704d39c5d0c98c9a38845ef26870a1af5e58ce436ad8e9bed348360b8c2c2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:52:54.571303 containerd[1481]: time="2025-03-17T17:52:54.571131111Z" level=info msg="CreateContainer within sandbox \"358704d39c5d0c98c9a38845ef26870a1af5e58ce436ad8e9bed348360b8c2c2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"33794ec554bc358f48001bfb241534881138fda09e1c1c4df948ac316ae1ffc0\""
Mar 17 17:52:54.573637 containerd[1481]: time="2025-03-17T17:52:54.573530536Z" level=info msg="StartContainer for \"33794ec554bc358f48001bfb241534881138fda09e1c1c4df948ac316ae1ffc0\""
Mar 17 17:52:54.623163 systemd[1]: Started cri-containerd-33794ec554bc358f48001bfb241534881138fda09e1c1c4df948ac316ae1ffc0.scope - libcontainer container 33794ec554bc358f48001bfb241534881138fda09e1c1c4df948ac316ae1ffc0.
Mar 17 17:52:54.672845 systemd[1]: cri-containerd-33794ec554bc358f48001bfb241534881138fda09e1c1c4df948ac316ae1ffc0.scope: Deactivated successfully.
Mar 17 17:52:54.674426 containerd[1481]: time="2025-03-17T17:52:54.674383392Z" level=info msg="StartContainer for \"33794ec554bc358f48001bfb241534881138fda09e1c1c4df948ac316ae1ffc0\" returns successfully"
Mar 17 17:52:54.704889 containerd[1481]: time="2025-03-17T17:52:54.704814023Z" level=info msg="shim disconnected" id=33794ec554bc358f48001bfb241534881138fda09e1c1c4df948ac316ae1ffc0 namespace=k8s.io
Mar 17 17:52:54.705127 containerd[1481]: time="2025-03-17T17:52:54.704934621Z" level=warning msg="cleaning up after shim disconnected" id=33794ec554bc358f48001bfb241534881138fda09e1c1c4df948ac316ae1ffc0 namespace=k8s.io
Mar 17 17:52:54.705127 containerd[1481]: time="2025-03-17T17:52:54.704984993Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:52:54.752805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33794ec554bc358f48001bfb241534881138fda09e1c1c4df948ac316ae1ffc0-rootfs.mount: Deactivated successfully.
Mar 17 17:52:55.550486 containerd[1481]: time="2025-03-17T17:52:55.548811274Z" level=info msg="CreateContainer within sandbox \"358704d39c5d0c98c9a38845ef26870a1af5e58ce436ad8e9bed348360b8c2c2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:52:55.577385 containerd[1481]: time="2025-03-17T17:52:55.574577639Z" level=info msg="CreateContainer within sandbox \"358704d39c5d0c98c9a38845ef26870a1af5e58ce436ad8e9bed348360b8c2c2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"436f1e752cbbeecfa775dbffc492df51767f32f497c37fb4d45422acb4ce6c7f\""
Mar 17 17:52:55.577385 containerd[1481]: time="2025-03-17T17:52:55.575437496Z" level=info msg="StartContainer for \"436f1e752cbbeecfa775dbffc492df51767f32f497c37fb4d45422acb4ce6c7f\""
Mar 17 17:52:55.580306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2739970849.mount: Deactivated successfully.
Mar 17 17:52:55.635001 systemd[1]: Started cri-containerd-436f1e752cbbeecfa775dbffc492df51767f32f497c37fb4d45422acb4ce6c7f.scope - libcontainer container 436f1e752cbbeecfa775dbffc492df51767f32f497c37fb4d45422acb4ce6c7f.
Mar 17 17:52:55.682625 containerd[1481]: time="2025-03-17T17:52:55.682570374Z" level=info msg="StartContainer for \"436f1e752cbbeecfa775dbffc492df51767f32f497c37fb4d45422acb4ce6c7f\" returns successfully"
Mar 17 17:52:55.692907 systemd[1]: cri-containerd-436f1e752cbbeecfa775dbffc492df51767f32f497c37fb4d45422acb4ce6c7f.scope: Deactivated successfully.
Mar 17 17:52:55.737167 containerd[1481]: time="2025-03-17T17:52:55.737088321Z" level=info msg="shim disconnected" id=436f1e752cbbeecfa775dbffc492df51767f32f497c37fb4d45422acb4ce6c7f namespace=k8s.io
Mar 17 17:52:55.737167 containerd[1481]: time="2025-03-17T17:52:55.737165837Z" level=warning msg="cleaning up after shim disconnected" id=436f1e752cbbeecfa775dbffc492df51767f32f497c37fb4d45422acb4ce6c7f namespace=k8s.io
Mar 17 17:52:55.737484 containerd[1481]: time="2025-03-17T17:52:55.737179641Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:52:55.755502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-436f1e752cbbeecfa775dbffc492df51767f32f497c37fb4d45422acb4ce6c7f-rootfs.mount: Deactivated successfully.
Mar 17 17:52:56.143321 kubelet[2721]: E0317 17:52:56.142727 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-swrdq" podUID="6caefc63-a740-4b28-ab1c-b16c5b819455"
Mar 17 17:52:56.294072 kubelet[2721]: E0317 17:52:56.294018 2721 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 17:52:56.555360 containerd[1481]: time="2025-03-17T17:52:56.555027177Z" level=info msg="CreateContainer within sandbox \"358704d39c5d0c98c9a38845ef26870a1af5e58ce436ad8e9bed348360b8c2c2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:52:56.579297 containerd[1481]: time="2025-03-17T17:52:56.579241803Z" level=info msg="CreateContainer within sandbox \"358704d39c5d0c98c9a38845ef26870a1af5e58ce436ad8e9bed348360b8c2c2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc6157c15796ea73d868fdb3baebf993511463423ff4fe672a1ef74f79241796\""
Mar 17 17:52:56.581888 containerd[1481]: time="2025-03-17T17:52:56.581839562Z" level=info msg="StartContainer for \"bc6157c15796ea73d868fdb3baebf993511463423ff4fe672a1ef74f79241796\""
Mar 17 17:52:56.630136 systemd[1]: run-containerd-runc-k8s.io-bc6157c15796ea73d868fdb3baebf993511463423ff4fe672a1ef74f79241796-runc.zJU9Rf.mount: Deactivated successfully.
Mar 17 17:52:56.637999 systemd[1]: Started cri-containerd-bc6157c15796ea73d868fdb3baebf993511463423ff4fe672a1ef74f79241796.scope - libcontainer container bc6157c15796ea73d868fdb3baebf993511463423ff4fe672a1ef74f79241796.
Mar 17 17:52:56.690836 containerd[1481]: time="2025-03-17T17:52:56.689744719Z" level=info msg="StartContainer for \"bc6157c15796ea73d868fdb3baebf993511463423ff4fe672a1ef74f79241796\" returns successfully"
Mar 17 17:52:57.187196 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 17:52:58.143246 kubelet[2721]: E0317 17:52:58.142959 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-swrdq" podUID="6caefc63-a740-4b28-ab1c-b16c5b819455"
Mar 17 17:52:58.747616 kubelet[2721]: I0317 17:52:58.747349 2721 setters.go:580] "Node became not ready" node="ci-4230-1-0-c35cfe960af5c690218c.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:52:58Z","lastTransitionTime":"2025-03-17T17:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 17:53:00.143329 kubelet[2721]: E0317 17:53:00.143008 2721 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-swrdq" podUID="6caefc63-a740-4b28-ab1c-b16c5b819455"
Mar 17 17:53:00.421364 systemd-networkd[1390]: lxc_health: Link UP
Mar 17 17:53:00.428060 systemd-networkd[1390]: lxc_health: Gained carrier
Mar 17 17:53:00.939043 kubelet[2721]: I0317 17:53:00.938661 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r7sng" podStartSLOduration=8.938634588 podStartE2EDuration="8.938634588s" podCreationTimestamp="2025-03-17 17:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:52:57.576637104 +0000 UTC m=+131.640371956" watchObservedRunningTime="2025-03-17 17:53:00.938634588 +0000 UTC m=+135.002369440"
Mar 17 17:53:02.165950 systemd-networkd[1390]: lxc_health: Gained IPv6LL
Mar 17 17:53:04.808482 ntpd[1450]: Listen normally on 15 lxc_health [fe80::888c:f4ff:fed6:e23a%14]:123
Mar 17 17:53:04.809058 ntpd[1450]: 17 Mar 17:53:04 ntpd[1450]: Listen normally on 15 lxc_health [fe80::888c:f4ff:fed6:e23a%14]:123
Mar 17 17:53:06.795013 systemd[1]: run-containerd-runc-k8s.io-bc6157c15796ea73d868fdb3baebf993511463423ff4fe672a1ef74f79241796-runc.KvGvD4.mount: Deactivated successfully.
Mar 17 17:53:06.967230 sshd[4651]: Connection closed by 139.178.89.65 port 39560
Mar 17 17:53:06.968397 sshd-session[4649]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:06.973855 systemd[1]: sshd@28-10.128.0.84:22-139.178.89.65:39560.service: Deactivated successfully.
Mar 17 17:53:06.976696 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 17:53:06.979347 systemd-logind[1468]: Session 29 logged out. Waiting for processes to exit.
Mar 17 17:53:06.981438 systemd-logind[1468]: Removed session 29.