Nov 6 23:36:29.251929 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Nov 6 22:02:38 -00 2025 Nov 6 23:36:29.251981 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25 Nov 6 23:36:29.251999 kernel: BIOS-provided physical RAM map: Nov 6 23:36:29.252014 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Nov 6 23:36:29.252027 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Nov 6 23:36:29.252041 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Nov 6 23:36:29.252087 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Nov 6 23:36:29.252103 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Nov 6 23:36:29.252121 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd318fff] usable Nov 6 23:36:29.252134 kernel: BIOS-e820: [mem 0x00000000bd319000-0x00000000bd322fff] ACPI data Nov 6 23:36:29.252158 kernel: BIOS-e820: [mem 0x00000000bd323000-0x00000000bf8ecfff] usable Nov 6 23:36:29.252173 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Nov 6 23:36:29.252187 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Nov 6 23:36:29.252202 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Nov 6 23:36:29.252223 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Nov 6 23:36:29.252249 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Nov 6 23:36:29.252265 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Nov 6 23:36:29.252280 kernel: NX (Execute Disable) protection: active Nov 6 23:36:29.252296 kernel: APIC: Static calls initialized Nov 6 23:36:29.252312 kernel: efi: EFI v2.7 by EDK II Nov 6 23:36:29.252328 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018 RNG=0xbfb73018 TPMEventLog=0xbd319018 Nov 6 23:36:29.252344 kernel: random: crng init done Nov 6 23:36:29.252359 kernel: secureboot: Secure boot disabled Nov 6 23:36:29.252374 kernel: SMBIOS 2.4 present. 
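The usable regions in the BIOS-e820 map above can be cross-checked against the memory totals the kernel prints later in the boot. A minimal Python sketch follows; the ranges are copied from the e820 lines above, while the summation itself is illustrative and not part of the boot output.

# Sum the "usable" ranges from the BIOS-e820 map above (ranges are inclusive).
usable = [
    (0x0000000000001000, 0x0000000000054fff),
    (0x0000000000060000, 0x0000000000097fff),
    (0x0000000000100000, 0x00000000bd318fff),
    (0x00000000bd323000, 0x00000000bf8ecfff),
    (0x00000000bfbff000, 0x00000000bffdffff),
    (0x0000000100000000, 0x000000021fffffff),
]
total_bytes = sum(end - start + 1 for start, end in usable)
print(f"{total_bytes / 1024:.0f}K usable ({total_bytes / 2**30:.2f} GiB)")

The sum comes to about 7859840K, within roughly 1 MB of the 7860544K figure in the "Memory:" line further down.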
Nov 6 23:36:29.252394 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025 Nov 6 23:36:29.252409 kernel: Hypervisor detected: KVM Nov 6 23:36:29.252425 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 6 23:36:29.252441 kernel: kvm-clock: using sched offset of 15440459814 cycles Nov 6 23:36:29.252457 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 6 23:36:29.252474 kernel: tsc: Detected 2299.998 MHz processor Nov 6 23:36:29.252489 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 23:36:29.252506 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 23:36:29.252522 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Nov 6 23:36:29.252538 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Nov 6 23:36:29.252558 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 23:36:29.252574 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Nov 6 23:36:29.252590 kernel: Using GB pages for direct mapping Nov 6 23:36:29.252606 kernel: ACPI: Early table checksum verification disabled Nov 6 23:36:29.252622 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Nov 6 23:36:29.252639 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Nov 6 23:36:29.252661 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Nov 6 23:36:29.252682 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Nov 6 23:36:29.252699 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Nov 6 23:36:29.252716 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Nov 6 23:36:29.252733 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Nov 6 23:36:29.252750 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Nov 6 23:36:29.252777 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Nov 6 23:36:29.252794 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Nov 6 23:36:29.252815 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Nov 6 23:36:29.252832 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Nov 6 23:36:29.252849 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Nov 6 23:36:29.252866 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Nov 6 23:36:29.252883 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Nov 6 23:36:29.252900 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Nov 6 23:36:29.252917 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Nov 6 23:36:29.252934 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Nov 6 23:36:29.252951 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Nov 6 23:36:29.252971 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Nov 6 23:36:29.252988 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 6 23:36:29.253004 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 6 23:36:29.253020 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 6 23:36:29.253035 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Nov 6 23:36:29.253070 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x100000000-0x21fffffff] Nov 6 23:36:29.253099 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Nov 6 23:36:29.253135 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Nov 6 23:36:29.253178 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Nov 6 23:36:29.253196 kernel: Zone ranges: Nov 6 23:36:29.253212 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 23:36:29.253226 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 6 23:36:29.253251 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Nov 6 23:36:29.253269 kernel: Movable zone start for each node Nov 6 23:36:29.253284 kernel: Early memory node ranges Nov 6 23:36:29.253318 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Nov 6 23:36:29.253334 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Nov 6 23:36:29.253350 kernel: node 0: [mem 0x0000000000100000-0x00000000bd318fff] Nov 6 23:36:29.253371 kernel: node 0: [mem 0x00000000bd323000-0x00000000bf8ecfff] Nov 6 23:36:29.253387 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Nov 6 23:36:29.253405 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Nov 6 23:36:29.253423 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Nov 6 23:36:29.253441 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 23:36:29.253459 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Nov 6 23:36:29.253477 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Nov 6 23:36:29.253494 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Nov 6 23:36:29.253512 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Nov 6 23:36:29.253535 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Nov 6 23:36:29.253552 kernel: ACPI: PM-Timer IO Port: 0xb008 Nov 6 23:36:29.253569 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 6 23:36:29.253587 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 23:36:29.253605 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 6 23:36:29.253623 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 23:36:29.253640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 6 23:36:29.253657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 6 23:36:29.253675 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 23:36:29.253703 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 6 23:36:29.253720 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Nov 6 23:36:29.253738 kernel: Booting paravirtualized kernel on KVM Nov 6 23:36:29.253756 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 23:36:29.253773 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 6 23:36:29.253790 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 6 23:36:29.253807 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 6 23:36:29.253824 kernel: pcpu-alloc: [0] 0 1 Nov 6 23:36:29.253841 kernel: kvm-guest: PV spinlocks enabled Nov 6 23:36:29.253862 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 6 23:36:29.253881 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25 Nov 6 23:36:29.253899 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 6 23:36:29.253916 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 6 23:36:29.253933 kernel: Fallback order for Node 0: 0 Nov 6 23:36:29.253950 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932270 Nov 6 23:36:29.253966 kernel: Policy zone: Normal Nov 6 23:36:29.253984 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 23:36:29.254004 kernel: software IO TLB: area num 2. Nov 6 23:36:29.254022 kernel: Memory: 7511308K/7860544K available (14336K kernel code, 2288K rwdata, 22872K rodata, 43520K init, 1560K bss, 348980K reserved, 0K cma-reserved) Nov 6 23:36:29.254039 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 6 23:36:29.254082 kernel: Kernel/User page tables isolation: enabled Nov 6 23:36:29.254099 kernel: ftrace: allocating 37954 entries in 149 pages Nov 6 23:36:29.254116 kernel: ftrace: allocated 149 pages with 4 groups Nov 6 23:36:29.254133 kernel: Dynamic Preempt: voluntary Nov 6 23:36:29.254150 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 23:36:29.254191 kernel: rcu: RCU event tracing is enabled. Nov 6 23:36:29.254209 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 6 23:36:29.254228 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 23:36:29.254339 kernel: Rude variant of Tasks RCU enabled. Nov 6 23:36:29.254373 kernel: Tracing variant of Tasks RCU enabled. Nov 6 23:36:29.254391 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 23:36:29.254409 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 6 23:36:29.254428 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 6 23:36:29.254446 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 6 23:36:29.254468 kernel: Console: colour dummy device 80x25 Nov 6 23:36:29.254494 kernel: printk: console [ttyS0] enabled Nov 6 23:36:29.254512 kernel: ACPI: Core revision 20230628 Nov 6 23:36:29.254530 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 23:36:29.254548 kernel: x2apic enabled Nov 6 23:36:29.254567 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 23:36:29.254585 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Nov 6 23:36:29.254604 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Nov 6 23:36:29.254622 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Nov 6 23:36:29.254644 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Nov 6 23:36:29.254663 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Nov 6 23:36:29.254680 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 23:36:29.254697 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 6 23:36:29.254730 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 6 23:36:29.254746 kernel: Spectre V2 : Mitigation: IBRS Nov 6 23:36:29.254765 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 23:36:29.254783 kernel: RETBleed: Mitigation: IBRS Nov 6 23:36:29.254806 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 6 23:36:29.254829 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Nov 6 23:36:29.254847 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 6 23:36:29.254866 kernel: MDS: Mitigation: Clear CPU buffers Nov 6 23:36:29.254885 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 6 23:36:29.254904 kernel: active return thunk: its_return_thunk Nov 6 23:36:29.254923 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 6 23:36:29.254942 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 23:36:29.254962 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 23:36:29.254980 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 23:36:29.255003 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 23:36:29.255022 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 6 23:36:29.255042 kernel: Freeing SMP alternatives memory: 32K Nov 6 23:36:29.255094 kernel: pid_max: default: 32768 minimum: 301 Nov 6 23:36:29.255122 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 6 23:36:29.255139 kernel: landlock: Up and running. Nov 6 23:36:29.255154 kernel: SELinux: Initializing. Nov 6 23:36:29.255170 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 23:36:29.255187 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 23:36:29.255210 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Nov 6 23:36:29.255237 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 23:36:29.255256 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 23:36:29.255276 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 23:36:29.255292 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Nov 6 23:36:29.255309 kernel: signal: max sigframe size: 1776 Nov 6 23:36:29.255324 kernel: rcu: Hierarchical SRCU implementation. Nov 6 23:36:29.255347 kernel: rcu: Max phase no-delay instances is 400. Nov 6 23:36:29.255368 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 6 23:36:29.255385 kernel: smp: Bringing up secondary CPUs ... Nov 6 23:36:29.255402 kernel: smpboot: x86: Booting SMP configuration: Nov 6 23:36:29.255419 kernel: .... node #0, CPUs: #1 Nov 6 23:36:29.255437 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Nov 6 23:36:29.255455 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 6 23:36:29.255472 kernel: smp: Brought up 1 node, 2 CPUs Nov 6 23:36:29.255489 kernel: smpboot: Max logical packages: 1 Nov 6 23:36:29.255506 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Nov 6 23:36:29.255527 kernel: devtmpfs: initialized Nov 6 23:36:29.255544 kernel: x86/mm: Memory block size: 128MB Nov 6 23:36:29.255576 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Nov 6 23:36:29.255594 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 23:36:29.255611 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 6 23:36:29.255628 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 23:36:29.255648 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 23:36:29.255665 kernel: audit: initializing netlink subsys (disabled) Nov 6 23:36:29.255684 kernel: audit: type=2000 audit(1762472186.730:1): state=initialized audit_enabled=0 res=1 Nov 6 23:36:29.255707 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 23:36:29.255736 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 23:36:29.255755 kernel: cpuidle: using governor menu Nov 6 23:36:29.255772 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 23:36:29.255792 kernel: dca service started, version 1.12.1 Nov 6 23:36:29.255811 kernel: PCI: Using configuration type 1 for base access Nov 6 23:36:29.255830 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
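The BogoMIPS figures above follow directly from the preset loops-per-jiffy value: with delay-loop calibration skipped, lpj=2299998 matches the 2299.998 MHz TSC rate, and the printed numbers are lpj divided by 500 per CPU. A short illustrative check; the HZ=1000 tick rate is an assumption, not something the log states.

# Quick arithmetic check of the BogoMIPS values logged above (illustrative only).
tsc_khz = 2299998              # "tsc: Detected 2299.998 MHz processor"
lpj = tsc_khz                  # calibration skipped; preset lpj equals the TSC kHz value here
hz = 1000                      # assumed kernel tick rate (CONFIG_HZ=1000)
bogomips_per_cpu = lpj / (500000 // hz)
print(bogomips_per_cpu)        # 4599.996 -> logged as "4599.99 BogoMIPS"
print(2 * bogomips_per_cpu)    # 9199.992 -> "Total of 2 processors activated (9199.99 BogoMIPS)"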
Nov 6 23:36:29.255849 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 23:36:29.255868 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 23:36:29.255891 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 23:36:29.255911 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 23:36:29.255930 kernel: ACPI: Added _OSI(Module Device) Nov 6 23:36:29.255949 kernel: ACPI: Added _OSI(Processor Device) Nov 6 23:36:29.255968 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 23:36:29.255987 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 6 23:36:29.256006 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 6 23:36:29.256025 kernel: ACPI: Interpreter enabled Nov 6 23:36:29.256060 kernel: ACPI: PM: (supports S0 S3 S5) Nov 6 23:36:29.256084 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 23:36:29.256103 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 23:36:29.256122 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 6 23:36:29.256141 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Nov 6 23:36:29.256160 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 23:36:29.256465 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 6 23:36:29.256673 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 6 23:36:29.256871 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 6 23:36:29.256895 kernel: PCI host bridge to bus 0000:00 Nov 6 23:36:29.259224 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 6 23:36:29.259443 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 6 23:36:29.259625 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 6 23:36:29.259799 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Nov 6 23:36:29.259972 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 23:36:29.260246 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 6 23:36:29.260456 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Nov 6 23:36:29.260670 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Nov 6 23:36:29.260866 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Nov 6 23:36:29.262402 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Nov 6 23:36:29.262959 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Nov 6 23:36:29.263194 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Nov 6 23:36:29.263572 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 6 23:36:29.263773 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Nov 6 23:36:29.263971 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Nov 6 23:36:29.264324 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Nov 6 23:36:29.264536 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Nov 6 23:36:29.264732 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Nov 6 23:36:29.264764 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 6 23:36:29.264784 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 6 23:36:29.264804 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 23:36:29.264823 kernel: ACPI: PCI: Interrupt 
link LNKD configured for IRQ 11 Nov 6 23:36:29.264841 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 6 23:36:29.264860 kernel: iommu: Default domain type: Translated Nov 6 23:36:29.264880 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 23:36:29.264900 kernel: efivars: Registered efivars operations Nov 6 23:36:29.264919 kernel: PCI: Using ACPI for IRQ routing Nov 6 23:36:29.264942 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 23:36:29.264961 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Nov 6 23:36:29.264980 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Nov 6 23:36:29.264998 kernel: e820: reserve RAM buffer [mem 0xbd319000-0xbfffffff] Nov 6 23:36:29.265017 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Nov 6 23:36:29.265036 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Nov 6 23:36:29.265070 kernel: vgaarb: loaded Nov 6 23:36:29.265087 kernel: clocksource: Switched to clocksource kvm-clock Nov 6 23:36:29.265106 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 23:36:29.265128 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 23:36:29.265145 kernel: pnp: PnP ACPI init Nov 6 23:36:29.265165 kernel: pnp: PnP ACPI: found 7 devices Nov 6 23:36:29.265182 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 23:36:29.265199 kernel: NET: Registered PF_INET protocol family Nov 6 23:36:29.265225 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 6 23:36:29.265251 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 6 23:36:29.265269 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 23:36:29.265287 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 6 23:36:29.265311 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 6 23:36:29.265327 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 6 23:36:29.265346 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 6 23:36:29.265364 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 6 23:36:29.265383 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 23:36:29.265402 kernel: NET: Registered PF_XDP protocol family Nov 6 23:36:29.265620 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 6 23:36:29.265809 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 6 23:36:29.266002 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 6 23:36:29.270151 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Nov 6 23:36:29.270411 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 6 23:36:29.270440 kernel: PCI: CLS 0 bytes, default 64 Nov 6 23:36:29.270459 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 6 23:36:29.270478 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Nov 6 23:36:29.270496 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 6 23:36:29.270514 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Nov 6 23:36:29.270540 kernel: clocksource: Switched to clocksource tsc Nov 6 23:36:29.270558 kernel: Initialise system trusted keyrings Nov 6 23:36:29.270576 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 
6 23:36:29.270595 kernel: Key type asymmetric registered Nov 6 23:36:29.270612 kernel: Asymmetric key parser 'x509' registered Nov 6 23:36:29.270630 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 6 23:36:29.270648 kernel: io scheduler mq-deadline registered Nov 6 23:36:29.270666 kernel: io scheduler kyber registered Nov 6 23:36:29.270684 kernel: io scheduler bfq registered Nov 6 23:36:29.270707 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 23:36:29.270726 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 6 23:36:29.270927 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Nov 6 23:36:29.270951 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Nov 6 23:36:29.271178 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Nov 6 23:36:29.271204 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Nov 6 23:36:29.271410 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Nov 6 23:36:29.271437 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 23:36:29.271462 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 23:36:29.271481 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 6 23:36:29.271499 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Nov 6 23:36:29.271515 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Nov 6 23:36:29.271737 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Nov 6 23:36:29.271762 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 6 23:36:29.271780 kernel: i8042: Warning: Keylock active Nov 6 23:36:29.271798 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 6 23:36:29.271816 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 23:36:29.272024 kernel: rtc_cmos 00:00: RTC can wake from S4 Nov 6 23:36:29.272329 kernel: rtc_cmos 00:00: registered as rtc0 Nov 6 23:36:29.272522 kernel: rtc_cmos 00:00: setting system clock to 2025-11-06T23:36:28 UTC (1762472188) Nov 6 23:36:29.272707 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Nov 6 23:36:29.272732 kernel: intel_pstate: CPU model not supported Nov 6 23:36:29.272751 kernel: pstore: Using crash dump compression: deflate Nov 6 23:36:29.272771 kernel: pstore: Registered efi_pstore as persistent store backend Nov 6 23:36:29.272796 kernel: NET: Registered PF_INET6 protocol family Nov 6 23:36:29.272815 kernel: Segment Routing with IPv6 Nov 6 23:36:29.272835 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 23:36:29.272855 kernel: NET: Registered PF_PACKET protocol family Nov 6 23:36:29.272875 kernel: Key type dns_resolver registered Nov 6 23:36:29.272894 kernel: IPI shorthand broadcast: enabled Nov 6 23:36:29.272913 kernel: sched_clock: Marking stable (1587004745, 450701406)->(2527262574, -489556423) Nov 6 23:36:29.272932 kernel: registered taskstats version 1 Nov 6 23:36:29.272952 kernel: Loading compiled-in X.509 certificates Nov 6 23:36:29.272972 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: d06f6bc77ef9183fbb55ec1fc021fe2cce974996' Nov 6 23:36:29.272995 kernel: Key type .fscrypt registered Nov 6 23:36:29.273012 kernel: Key type fscrypt-provisioning registered Nov 6 23:36:29.273032 kernel: ima: Allocated hash algorithm: sha1 Nov 6 23:36:29.273369 kernel: ima: No architecture policies found Nov 6 23:36:29.273391 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 6 
23:36:29.273411 kernel: clk: Disabling unused clocks Nov 6 23:36:29.273430 kernel: Freeing unused kernel image (initmem) memory: 43520K Nov 6 23:36:29.273449 kernel: Write protecting the kernel read-only data: 38912k Nov 6 23:36:29.273468 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Nov 6 23:36:29.273634 kernel: Run /init as init process Nov 6 23:36:29.273653 kernel: with arguments: Nov 6 23:36:29.273672 kernel: /init Nov 6 23:36:29.273691 kernel: with environment: Nov 6 23:36:29.273710 kernel: HOME=/ Nov 6 23:36:29.273852 kernel: TERM=linux Nov 6 23:36:29.273875 systemd[1]: Successfully made /usr/ read-only. Nov 6 23:36:29.273900 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 23:36:29.273926 systemd[1]: Detected virtualization google. Nov 6 23:36:29.274081 systemd[1]: Detected architecture x86-64. Nov 6 23:36:29.274098 systemd[1]: Running in initrd. Nov 6 23:36:29.274115 systemd[1]: No hostname configured, using default hostname. Nov 6 23:36:29.274134 systemd[1]: Hostname set to . Nov 6 23:36:29.274153 systemd[1]: Initializing machine ID from random generator. Nov 6 23:36:29.274169 systemd[1]: Queued start job for default target initrd.target. Nov 6 23:36:29.274323 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:36:29.274344 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:36:29.274366 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 23:36:29.274385 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 23:36:29.274405 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 23:36:29.274427 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 23:36:29.274449 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 6 23:36:29.274602 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 6 23:36:29.274644 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:36:29.274669 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:36:29.274856 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:36:29.274878 systemd[1]: Reached target slices.target - Slice Units. Nov 6 23:36:29.274899 systemd[1]: Reached target swap.target - Swaps. Nov 6 23:36:29.274925 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:36:29.274945 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 23:36:29.274966 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 23:36:29.274987 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 23:36:29.275009 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 23:36:29.275030 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
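As a cross-check on the timestamps, the epoch value in the rtc_cmos line above matches the ISO time printed beside it; a one-line illustrative conversion:

from datetime import datetime, timezone
# 1762472188 is the epoch value from "rtc_cmos 00:00: setting system clock to 2025-11-06T23:36:28 UTC (1762472188)"
print(datetime.fromtimestamp(1762472188, tz=timezone.utc).isoformat())  # 2025-11-06T23:36:28+00:00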
Nov 6 23:36:29.277222 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 23:36:29.277285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:36:29.277311 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:36:29.277343 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 23:36:29.277363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 23:36:29.277517 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 23:36:29.277539 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 23:36:29.277559 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 23:36:29.277585 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 23:36:29.277605 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:36:29.277754 systemd-journald[185]: Collecting audit messages is disabled. Nov 6 23:36:29.277804 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 23:36:29.277825 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:36:29.277850 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 23:36:29.277871 systemd-journald[185]: Journal started Nov 6 23:36:29.277911 systemd-journald[185]: Runtime Journal (/run/log/journal/4ea9a1b3e4964189bbfb2bef7f87c794) is 8M, max 148.6M, 140.6M free. Nov 6 23:36:29.251239 systemd-modules-load[186]: Inserted module 'overlay' Nov 6 23:36:29.293737 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 23:36:29.302081 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 23:36:29.307092 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 23:36:29.307943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:36:29.314600 kernel: Bridge firewalling registered Nov 6 23:36:29.311607 systemd-modules-load[186]: Inserted module 'br_netfilter' Nov 6 23:36:29.315605 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 23:36:29.328787 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 23:36:29.337350 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:36:29.349311 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:36:29.351287 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 23:36:29.372258 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 23:36:29.378512 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:36:29.397098 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:36:29.398979 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:36:29.406330 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 23:36:29.413443 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:36:29.418783 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 6 23:36:29.464330 systemd-resolved[219]: Positive Trust Anchors: Nov 6 23:36:29.464924 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:36:29.475587 dracut-cmdline[221]: dracut-dracut-053 Nov 6 23:36:29.475587 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=1a4810aa24298684dd9efd264f1d9b812e4e16f32429f4615db9ff284dd4ac25 Nov 6 23:36:29.464996 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:36:29.472530 systemd-resolved[219]: Defaulting to hostname 'linux'. Nov 6 23:36:29.474375 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:36:29.480339 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:36:29.586100 kernel: SCSI subsystem initialized Nov 6 23:36:29.599105 kernel: Loading iSCSI transport class v2.0-870. Nov 6 23:36:29.611115 kernel: iscsi: registered transport (tcp) Nov 6 23:36:29.638595 kernel: iscsi: registered transport (qla4xxx) Nov 6 23:36:29.638708 kernel: QLogic iSCSI HBA Driver Nov 6 23:36:29.693203 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 23:36:29.705336 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 23:36:29.744129 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 23:36:29.744222 kernel: device-mapper: uevent: version 1.0.3 Nov 6 23:36:29.744259 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 6 23:36:29.791109 kernel: raid6: avx2x4 gen() 17879 MB/s Nov 6 23:36:29.808149 kernel: raid6: avx2x2 gen() 17305 MB/s Nov 6 23:36:29.826201 kernel: raid6: avx2x1 gen() 13927 MB/s Nov 6 23:36:29.826300 kernel: raid6: using algorithm avx2x4 gen() 17879 MB/s Nov 6 23:36:29.844817 kernel: raid6: .... xor() 7613 MB/s, rmw enabled Nov 6 23:36:29.844897 kernel: raid6: using avx2x2 recovery algorithm Nov 6 23:36:29.869104 kernel: xor: automatically using best checksumming function avx Nov 6 23:36:30.055167 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 23:36:30.070204 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:36:30.078566 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:36:30.124391 systemd-udevd[403]: Using default interface naming scheme 'v255'. Nov 6 23:36:30.135607 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:36:30.154353 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Nov 6 23:36:30.194221 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Nov 6 23:36:30.239738 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 23:36:30.244565 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 23:36:30.369691 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:36:30.391425 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 23:36:30.447656 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 23:36:30.452896 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 23:36:30.473227 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:36:30.480237 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 23:36:30.504333 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 23:36:30.541091 kernel: scsi host0: Virtio SCSI HBA Nov 6 23:36:30.557091 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Nov 6 23:36:30.563089 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 23:36:30.601006 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 23:36:30.638648 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 23:36:30.698435 kernel: AVX2 version of gcm_enc/dec engaged. Nov 6 23:36:30.698487 kernel: AES CTR mode by8 optimization enabled Nov 6 23:36:30.638944 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:36:30.640430 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:36:30.769339 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Nov 6 23:36:30.769709 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Nov 6 23:36:30.769965 kernel: sd 0:0:1:0: [sda] Write Protect is off Nov 6 23:36:30.770224 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Nov 6 23:36:30.770461 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 6 23:36:30.640493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:36:30.640724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:36:30.642268 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:36:30.767958 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:36:30.816422 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 23:36:30.816490 kernel: GPT:17805311 != 33554431 Nov 6 23:36:30.816516 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 6 23:36:30.816540 kernel: GPT:17805311 != 33554431 Nov 6 23:36:30.816565 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 6 23:36:30.816586 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 23:36:30.816608 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Nov 6 23:36:30.839317 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:36:30.861345 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 23:36:30.937458 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
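The GPT warnings above are consistent with a disk image whose partition table was written for a smaller image than the provisioned disk: the backup GPT header sits at LBA 17805311 while the actual 16 GiB disk ends at LBA 33554431. Rough arithmetic from the figures in the log (illustrative only; the sector size and LBA values are taken from the lines above):

sector = 512
disk_sectors = 33554432                            # "[sda] 33554432 512-byte logical blocks"
print(disk_sectors * sector / 1e9)                 # ~17.18 -> "(17.2 GB ..."
print(disk_sectors * sector / 2**30)               # 16.0   -> "... 16.0 GiB)"
gpt_alt_header_lba = 17805311                      # where the image's backup GPT header currently sits
print(gpt_alt_header_lba, "!=", disk_sectors - 1)  # matches "GPT:17805311 != 33554431"
print((gpt_alt_header_lba + 1) * sector / 2**30)   # ~8.49 GiB: roughly the size the image was built for

The disk-uuid messages further down ("Secondary Entries is updated. Secondary Header is updated.") show the backup header being rewritten for the full disk.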
Nov 6 23:36:30.941072 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (454) Nov 6 23:36:30.952076 kernel: BTRFS: device fsid 7e63b391-7474-48b8-9614-cf161680d90d devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (452) Nov 6 23:36:30.978877 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Nov 6 23:36:31.027537 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Nov 6 23:36:31.046435 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Nov 6 23:36:31.059449 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Nov 6 23:36:31.059656 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Nov 6 23:36:31.085384 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 23:36:31.128705 disk-uuid[553]: Primary Header is updated. Nov 6 23:36:31.128705 disk-uuid[553]: Secondary Entries is updated. Nov 6 23:36:31.128705 disk-uuid[553]: Secondary Header is updated. Nov 6 23:36:31.153252 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 23:36:32.200084 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 23:36:32.200440 disk-uuid[554]: The operation has completed successfully. Nov 6 23:36:32.350711 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 23:36:32.350893 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 23:36:32.429563 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 6 23:36:32.451873 sh[568]: Success Nov 6 23:36:32.480319 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 6 23:36:32.678672 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 6 23:36:32.696310 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 6 23:36:32.702800 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 6 23:36:32.768475 kernel: BTRFS info (device dm-0): first mount of filesystem 7e63b391-7474-48b8-9614-cf161680d90d Nov 6 23:36:32.768573 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:36:32.768599 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 6 23:36:32.772247 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 23:36:32.777226 kernel: BTRFS info (device dm-0): using free space tree Nov 6 23:36:32.832155 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 6 23:36:32.844965 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 6 23:36:32.854327 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 23:36:32.860666 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 23:36:32.876782 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 6 23:36:32.930590 kernel: BTRFS info (device sda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:36:32.930686 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:36:32.930712 kernel: BTRFS info (device sda6): using free space tree Nov 6 23:36:32.945643 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 23:36:32.945730 kernel: BTRFS info (device sda6): auto enabling async discard Nov 6 23:36:32.956209 kernel: BTRFS info (device sda6): last unmount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:36:32.966221 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 23:36:32.976359 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 6 23:36:33.089458 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 23:36:33.108371 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 23:36:33.231072 systemd-networkd[748]: lo: Link UP Nov 6 23:36:33.231088 systemd-networkd[748]: lo: Gained carrier Nov 6 23:36:33.234862 systemd-networkd[748]: Enumeration completed Nov 6 23:36:33.236318 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 23:36:33.236536 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:36:33.236544 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 23:36:33.270897 ignition[674]: Ignition 2.20.0 Nov 6 23:36:33.240136 systemd-networkd[748]: eth0: Link UP Nov 6 23:36:33.270908 ignition[674]: Stage: fetch-offline Nov 6 23:36:33.240156 systemd-networkd[748]: eth0: Gained carrier Nov 6 23:36:33.270965 ignition[674]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:36:33.240181 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:36:33.270981 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 23:36:33.255723 systemd-networkd[748]: eth0: Overlong DHCP hostname received, shortened from 'ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf.c.flatcar-212911.internal' to 'ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf' Nov 6 23:36:33.271368 ignition[674]: parsed url from cmdline: "" Nov 6 23:36:33.255748 systemd-networkd[748]: eth0: DHCPv4 address 10.128.0.22/32, gateway 10.128.0.1 acquired from 169.254.169.254 Nov 6 23:36:33.271378 ignition[674]: no config URL provided Nov 6 23:36:33.264398 systemd[1]: Reached target network.target - Network. Nov 6 23:36:33.271387 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 23:36:33.273619 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 23:36:33.271405 ignition[674]: no config at "/usr/lib/ignition/user.ign" Nov 6 23:36:33.290468 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 6 23:36:33.271417 ignition[674]: failed to fetch config: resource requires networking Nov 6 23:36:33.351455 unknown[757]: fetched base config from "system" Nov 6 23:36:33.271748 ignition[674]: Ignition finished successfully Nov 6 23:36:33.351469 unknown[757]: fetched base config from "system" Nov 6 23:36:33.335464 ignition[757]: Ignition 2.20.0 Nov 6 23:36:33.351479 unknown[757]: fetched user config from "gcp" Nov 6 23:36:33.335607 ignition[757]: Stage: fetch Nov 6 23:36:33.355721 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 6 23:36:33.335926 ignition[757]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:36:33.373341 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 23:36:33.335948 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 23:36:33.421040 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 23:36:33.336202 ignition[757]: parsed url from cmdline: "" Nov 6 23:36:33.447645 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 23:36:33.336214 ignition[757]: no config URL provided Nov 6 23:36:33.505018 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 23:36:33.336227 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 23:36:33.512084 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 23:36:33.336253 ignition[757]: no config at "/usr/lib/ignition/user.ign" Nov 6 23:36:33.528487 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 23:36:33.336320 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Nov 6 23:36:33.536517 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 23:36:33.343971 ignition[757]: GET result: OK Nov 6 23:36:33.546346 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:36:33.344107 ignition[757]: parsing config with SHA512: 174f4b3c65f593ca0a2e48c4f2e3c8bff59800c28bf616200cfc3487a2cb79c3db8f59a82704b0fa9668e2838aded25faaf1dc9cfd420035829a5ad1b1707dd1 Nov 6 23:36:33.557328 systemd[1]: Reached target basic.target - Basic System. Nov 6 23:36:33.352510 ignition[757]: fetch: fetch complete Nov 6 23:36:33.575834 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 23:36:33.352520 ignition[757]: fetch: fetch passed Nov 6 23:36:33.352604 ignition[757]: Ignition finished successfully Nov 6 23:36:33.415562 ignition[763]: Ignition 2.20.0 Nov 6 23:36:33.415572 ignition[763]: Stage: kargs Nov 6 23:36:33.415787 ignition[763]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:36:33.415799 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 23:36:33.417845 ignition[763]: kargs: kargs passed Nov 6 23:36:33.417934 ignition[763]: Ignition finished successfully Nov 6 23:36:33.501869 ignition[769]: Ignition 2.20.0 Nov 6 23:36:33.501985 ignition[769]: Stage: disks Nov 6 23:36:33.502497 ignition[769]: no configs at "/usr/lib/ignition/base.d" Nov 6 23:36:33.502513 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 23:36:33.503604 ignition[769]: disks: disks passed Nov 6 23:36:33.503667 ignition[769]: Ignition finished successfully Nov 6 23:36:33.651519 systemd-fsck[778]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 6 23:36:33.660419 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
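The fetch stage above pulls the Ignition config from the GCE metadata server; both the GET to 169.254.169.254 and the SHA512 of the fetched config appear in the log. Below is a minimal Python sketch of an equivalent request, not Ignition's actual implementation: the URL is copied from the log, the Metadata-Flavor header is the standard requirement of the GCE metadata server, and the timeout value is an arbitrary choice here.

import hashlib
import urllib.request

URL = "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"
req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req, timeout=10) as resp:
    body = resp.read()
# Ignition logs a SHA512 digest of the fetched config ("parsing config with SHA512: ...").
print(hashlib.sha512(body).hexdigest())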
Nov 6 23:36:33.673385 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 23:36:33.835213 kernel: EXT4-fs (sda9): mounted filesystem 2abcf372-764b-46c0-a870-42c779c5f871 r/w with ordered data mode. Quota mode: none. Nov 6 23:36:33.836398 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 23:36:33.837369 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 23:36:33.864247 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 23:36:33.877248 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 23:36:33.887747 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 6 23:36:33.887911 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 23:36:33.887963 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 23:36:33.913138 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (786) Nov 6 23:36:33.921177 kernel: BTRFS info (device sda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:36:33.921280 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:36:33.927431 kernel: BTRFS info (device sda6): using free space tree Nov 6 23:36:33.929348 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 23:36:33.945290 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 23:36:33.945338 kernel: BTRFS info (device sda6): auto enabling async discard Nov 6 23:36:33.949388 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 6 23:36:33.961575 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 23:36:34.140239 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 23:36:34.152310 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory Nov 6 23:36:34.161213 initrd-setup-root[824]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 23:36:34.172029 initrd-setup-root[831]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 23:36:34.299387 systemd-networkd[748]: eth0: Gained IPv6LL Nov 6 23:36:34.407398 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 23:36:34.424308 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 23:36:34.435385 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 23:36:34.465520 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 23:36:34.472316 kernel: BTRFS info (device sda6): last unmount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:36:34.511088 ignition[898]: INFO : Ignition 2.20.0 Nov 6 23:36:34.511088 ignition[898]: INFO : Stage: mount Nov 6 23:36:34.511088 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:36:34.511088 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 23:36:34.540397 ignition[898]: INFO : mount: mount passed Nov 6 23:36:34.540397 ignition[898]: INFO : Ignition finished successfully Nov 6 23:36:34.514560 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 23:36:34.530391 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 23:36:34.546792 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 23:36:34.842397 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 6 23:36:34.885096 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (910) Nov 6 23:36:34.889135 kernel: BTRFS info (device sda6): first mount of filesystem c2193637-3855-459d-ac6d-9b4591136350 Nov 6 23:36:34.889227 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 23:36:34.889255 kernel: BTRFS info (device sda6): using free space tree Nov 6 23:36:34.902393 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 23:36:34.902620 kernel: BTRFS info (device sda6): auto enabling async discard Nov 6 23:36:34.910554 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 23:36:34.950008 ignition[927]: INFO : Ignition 2.20.0 Nov 6 23:36:34.950008 ignition[927]: INFO : Stage: files Nov 6 23:36:34.958284 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:36:34.958284 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 23:36:34.958284 ignition[927]: DEBUG : files: compiled without relabeling support, skipping Nov 6 23:36:34.958284 ignition[927]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 23:36:34.958284 ignition[927]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 23:36:34.990887 ignition[927]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 23:36:34.990887 ignition[927]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 23:36:34.990887 ignition[927]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 23:36:34.990887 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 23:36:34.990887 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 23:36:34.966791 unknown[927]: wrote ssh authorized keys file for user: core Nov 6 23:36:35.095198 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 23:36:35.222601 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 23:36:35.230241 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 23:36:35.230241 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 6 23:36:35.450456 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 6 23:36:35.614293 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 6 23:36:35.621235 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 6 23:36:35.621235 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 23:36:35.621235 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 23:36:35.621235 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 23:36:35.621235 
ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 23:36:35.621235 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 23:36:35.621235 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 23:36:35.621235 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 23:36:35.621235 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 23:36:35.621235 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 23:36:35.621235 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 23:36:35.621235 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 23:36:35.621235 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 23:36:35.621235 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 6 23:36:36.045081 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 6 23:36:36.704969 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 23:36:36.704969 ignition[927]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 6 23:36:36.716294 ignition[927]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 23:36:36.716294 ignition[927]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 23:36:36.716294 ignition[927]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 6 23:36:36.716294 ignition[927]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Nov 6 23:36:36.716294 ignition[927]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 23:36:36.716294 ignition[927]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 23:36:36.716294 ignition[927]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 23:36:36.716294 ignition[927]: INFO : files: files passed Nov 6 23:36:36.716294 ignition[927]: INFO : Ignition finished successfully Nov 6 23:36:36.711636 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 23:36:36.722687 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 23:36:36.734378 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
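The Ignition "files" stage above is driven by a provisioning config supplied to the instance. A minimal Butane sketch that would produce roughly this sequence is shown below, transpiled to Ignition JSON with the butane CLI; the URLs, paths, link target and unit name are taken from the log, while the ssh key, the inline file bodies and the unit's ExecStart are placeholders and purely illustrative.

```bash
# Sketch of a Butane config matching the logged Ignition files stage.
cat > config.bu <<'EOF'
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAAC3... placeholder-key
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
    - path: /opt/bin/cilium.tar.gz
      contents:
        source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw
    # /home/core/install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml and
    # /etc/flatcar/update.conf follow the same pattern with inline contents.
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true          # matches "setting preset to enabled" in the log
      contents: |
        [Unit]
        Description=Unpack helm to /opt/bin
        [Service]
        Type=oneshot
        # Illustrative only; the real unit body is not shown in the log.
        ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.17.3-linux-amd64.tar.gz --strip-components=1 linux-amd64/helm
        [Install]
        WantedBy=multi-user.target
EOF

# Transpile to the Ignition JSON that the instance actually consumes.
butane --pretty --strict config.bu > config.ign
```

On GCE the resulting config.ign is typically handed to the VM as user-data metadata, which is where Ignition picks it up on first boot.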
Nov 6 23:36:36.769741 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 23:36:36.797325 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:36:36.797325 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:36:36.769911 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 6 23:36:36.822260 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 23:36:36.788997 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 23:36:36.797004 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 23:36:36.809404 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 23:36:36.859609 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 23:36:36.859735 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 23:36:36.860790 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 23:36:36.866563 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 23:36:36.871619 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 23:36:36.878605 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 23:36:36.917476 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 23:36:36.924519 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 23:36:36.953804 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:36:36.954169 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:36:36.963522 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 23:36:36.967895 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 23:36:36.968571 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 23:36:36.978616 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 23:36:36.983789 systemd[1]: Stopped target basic.target - Basic System. Nov 6 23:36:36.989958 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 23:36:37.000438 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 23:36:37.004807 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 23:36:37.016415 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 23:36:37.016906 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 23:36:37.022862 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 23:36:37.029990 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 23:36:37.035672 systemd[1]: Stopped target swap.target - Swaps. Nov 6 23:36:37.046393 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 23:36:37.047025 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 23:36:37.053761 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:36:37.054278 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 6 23:36:37.059985 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 23:36:37.061115 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:36:37.065667 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 23:36:37.065933 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 23:36:37.076105 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 23:36:37.076707 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 23:36:37.081738 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 23:36:37.081962 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 23:36:37.101485 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 23:36:37.114225 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 23:36:37.114589 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:36:37.126412 ignition[980]: INFO : Ignition 2.20.0 Nov 6 23:36:37.126412 ignition[980]: INFO : Stage: umount Nov 6 23:36:37.126412 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 23:36:37.126412 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 6 23:36:37.126256 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 23:36:37.141637 ignition[980]: INFO : umount: umount passed Nov 6 23:36:37.141637 ignition[980]: INFO : Ignition finished successfully Nov 6 23:36:37.129186 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 23:36:37.129464 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:36:37.135477 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 23:36:37.135697 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 23:36:37.151313 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 23:36:37.153326 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 23:36:37.153493 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 23:36:37.162395 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 23:36:37.162624 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 23:36:37.173097 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 23:36:37.173250 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 23:36:37.174311 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 23:36:37.174475 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 23:36:37.189561 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 23:36:37.189678 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 23:36:37.197353 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 6 23:36:37.197456 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 6 23:36:37.201557 systemd[1]: Stopped target network.target - Network. Nov 6 23:36:37.204758 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 23:36:37.204867 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 23:36:37.209579 systemd[1]: Stopped target paths.target - Path Units. Nov 6 23:36:37.217362 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Nov 6 23:36:37.221217 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:36:37.227489 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 23:36:37.231478 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 23:36:37.237716 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 23:36:37.237898 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 23:36:37.242537 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 23:36:37.242613 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 23:36:37.248500 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 23:36:37.248695 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 23:36:37.254594 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 23:36:37.254693 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 23:36:37.260502 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 23:36:37.260706 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 23:36:37.265231 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 23:36:37.278610 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 23:36:37.294905 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 23:36:37.295099 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 23:36:37.300071 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 23:36:37.300375 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 23:36:37.300521 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 23:36:37.305300 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 23:36:37.306776 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 23:36:37.306833 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:36:37.314253 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 23:36:37.323203 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 23:36:37.323332 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 23:36:37.328350 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 23:36:37.328483 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:36:37.332576 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 23:36:37.332666 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 23:36:37.337479 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 23:36:37.337561 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:36:37.345556 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:36:37.355405 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 23:36:37.355548 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:36:37.359711 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 23:36:37.359948 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 6 23:36:37.374005 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 23:36:37.374137 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 23:36:37.379590 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 23:36:37.379669 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:36:37.379768 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 23:36:37.548400 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Nov 6 23:36:37.379831 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:36:37.391503 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 23:36:37.391587 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 23:36:37.400261 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 23:36:37.400397 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:36:37.421309 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 23:36:37.433208 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 23:36:37.433341 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:36:37.438665 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:36:37.438907 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:36:37.450078 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 6 23:36:37.450172 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:36:37.450695 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 23:36:37.450823 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 23:36:37.457801 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 23:36:37.457935 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 23:36:37.466554 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 23:36:37.481323 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 23:36:37.509443 systemd[1]: Switching root. Nov 6 23:36:37.633197 systemd-journald[185]: Journal stopped Nov 6 23:36:40.170676 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 23:36:40.170758 kernel: SELinux: policy capability open_perms=1 Nov 6 23:36:40.170782 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 23:36:40.170800 kernel: SELinux: policy capability always_check_network=0 Nov 6 23:36:40.170819 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 23:36:40.170837 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 23:36:40.170856 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 23:36:40.170888 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 23:36:40.170911 kernel: audit: type=1403 audit(1762472198.242:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 23:36:40.170932 systemd[1]: Successfully loaded SELinux policy in 44.988ms. Nov 6 23:36:40.170954 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.550ms. 
Nov 6 23:36:40.170975 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 23:36:40.170993 systemd[1]: Detected virtualization google. Nov 6 23:36:40.171013 systemd[1]: Detected architecture x86-64. Nov 6 23:36:40.171059 systemd[1]: Detected first boot. Nov 6 23:36:40.171085 systemd[1]: Initializing machine ID from random generator. Nov 6 23:36:40.171106 zram_generator::config[1023]: No configuration found. Nov 6 23:36:40.171130 kernel: Guest personality initialized and is inactive Nov 6 23:36:40.171149 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 23:36:40.171801 kernel: Initialized host personality Nov 6 23:36:40.171834 kernel: NET: Registered PF_VSOCK protocol family Nov 6 23:36:40.171856 systemd[1]: Populated /etc with preset unit settings. Nov 6 23:36:40.171881 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 23:36:40.171902 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 23:36:40.171923 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 23:36:40.171944 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 23:36:40.171966 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 23:36:40.171987 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 23:36:40.172015 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 23:36:40.172036 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 23:36:40.172170 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 23:36:40.172316 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 23:36:40.172360 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 23:36:40.172385 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 23:36:40.172407 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:36:40.172445 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:36:40.172466 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 23:36:40.172486 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 23:36:40.172507 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 23:36:40.172528 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 23:36:40.172556 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 23:36:40.172577 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 23:36:40.172604 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 23:36:40.172629 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 23:36:40.172651 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Nov 6 23:36:40.172672 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 23:36:40.172694 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:36:40.172716 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 23:36:40.172738 systemd[1]: Reached target slices.target - Slice Units. Nov 6 23:36:40.172762 systemd[1]: Reached target swap.target - Swaps. Nov 6 23:36:40.172786 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 23:36:40.172815 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 23:36:40.172837 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 23:36:40.172864 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:36:40.172888 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 23:36:40.172916 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:36:40.172943 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 23:36:40.172966 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 23:36:40.172989 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 23:36:40.173012 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 23:36:40.173035 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:36:40.173089 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 23:36:40.173112 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 23:36:40.173594 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 23:36:40.173651 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 23:36:40.173677 systemd[1]: Reached target machines.target - Containers. Nov 6 23:36:40.173701 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 23:36:40.173725 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:36:40.173749 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 23:36:40.173771 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 23:36:40.173793 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:36:40.173816 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:36:40.173844 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:36:40.173867 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 23:36:40.173888 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:36:40.173926 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 23:36:40.173948 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 23:36:40.173975 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 23:36:40.173997 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Nov 6 23:36:40.174019 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 23:36:40.174149 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:36:40.174180 kernel: fuse: init (API version 7.39) Nov 6 23:36:40.174324 kernel: ACPI: bus type drm_connector registered Nov 6 23:36:40.174352 kernel: loop: module loaded Nov 6 23:36:40.174376 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 23:36:40.174400 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 23:36:40.174423 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 23:36:40.174521 systemd-journald[1111]: Collecting audit messages is disabled. Nov 6 23:36:40.174569 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 23:36:40.174593 systemd-journald[1111]: Journal started Nov 6 23:36:40.174645 systemd-journald[1111]: Runtime Journal (/run/log/journal/3826361cac2d40df869c2b0ae69132a2) is 8M, max 148.6M, 140.6M free. Nov 6 23:36:39.321348 systemd[1]: Queued start job for default target multi-user.target. Nov 6 23:36:39.332312 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 6 23:36:39.333007 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 23:36:40.194091 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 23:36:40.214417 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 23:36:40.227122 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 23:36:40.227311 systemd[1]: Stopped verity-setup.service. Nov 6 23:36:40.239103 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:36:40.250127 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 23:36:40.257593 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 23:36:40.265566 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 23:36:40.273563 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 23:36:40.281563 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 23:36:40.286797 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 23:36:40.295798 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 23:36:40.304118 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 23:36:40.310878 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:36:40.319648 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 23:36:40.320026 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 23:36:40.328841 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:36:40.329232 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:36:40.334657 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 23:36:40.334989 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:36:40.341836 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 6 23:36:40.342203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:36:40.350837 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 23:36:40.351201 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 23:36:40.359035 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:36:40.359797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:36:40.375035 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 23:36:40.382950 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 23:36:40.390793 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 23:36:40.400150 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 23:36:40.407732 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:36:40.434204 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 23:36:40.452299 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 23:36:40.461670 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 23:36:40.468375 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 23:36:40.468761 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 23:36:40.478857 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 23:36:40.496417 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 23:36:40.504023 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 23:36:40.510498 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:36:40.518871 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 23:36:40.534407 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 23:36:40.541492 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:36:40.548914 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 23:36:40.555444 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:36:40.561535 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:36:40.566605 systemd-journald[1111]: Time spent on flushing to /var/log/journal/3826361cac2d40df869c2b0ae69132a2 is 81.045ms for 946 entries. Nov 6 23:36:40.566605 systemd-journald[1111]: System Journal (/var/log/journal/3826361cac2d40df869c2b0ae69132a2) is 8M, max 584.8M, 576.8M free. Nov 6 23:36:40.670516 systemd-journald[1111]: Received client request to flush runtime journal. Nov 6 23:36:40.579462 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 23:36:40.589086 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 23:36:40.610031 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Nov 6 23:36:40.621039 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 23:36:40.632620 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 23:36:40.639752 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 23:36:40.650383 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 23:36:40.678129 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 23:36:40.686790 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 23:36:40.708325 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 23:36:40.721478 kernel: loop0: detected capacity change from 0 to 229808 Nov 6 23:36:40.747175 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:36:40.755776 udevadm[1149]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 6 23:36:40.796789 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 23:36:40.802915 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 23:36:40.851087 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 23:36:40.870867 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 23:36:40.884450 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 23:36:40.904095 kernel: loop1: detected capacity change from 0 to 138176 Nov 6 23:36:40.975815 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Nov 6 23:36:40.975858 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Nov 6 23:36:40.992243 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:36:41.046405 kernel: loop2: detected capacity change from 0 to 147912 Nov 6 23:36:41.227102 kernel: loop3: detected capacity change from 0 to 52152 Nov 6 23:36:41.354284 kernel: loop4: detected capacity change from 0 to 229808 Nov 6 23:36:41.432424 kernel: loop5: detected capacity change from 0 to 138176 Nov 6 23:36:41.522442 kernel: loop6: detected capacity change from 0 to 147912 Nov 6 23:36:41.605130 kernel: loop7: detected capacity change from 0 to 52152 Nov 6 23:36:41.661671 (sd-merge)[1172]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Nov 6 23:36:41.664552 (sd-merge)[1172]: Merged extensions into '/usr'. Nov 6 23:36:41.684926 systemd[1]: Reload requested from client PID 1147 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 23:36:41.685098 systemd[1]: Reloading... Nov 6 23:36:41.867399 zram_generator::config[1196]: No configuration found. Nov 6 23:36:42.115842 ldconfig[1142]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 23:36:42.145547 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:36:42.244458 systemd[1]: Reloading finished in 558 ms. Nov 6 23:36:42.263962 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 23:36:42.268713 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
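The sd-merge step above overlays the containerd-flatcar, docker-flatcar, kubernetes and oem-gce system extension images onto /usr. A short sketch of how to inspect that merge on the running node, assuming the systemd-sysext tool from the same systemd release:

```bash
# Show which sysext overlays are currently merged and into which hierarchies.
systemd-sysext status

# The kubernetes extension was enabled via the symlink Ignition wrote earlier:
# /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
ls -l /etc/extensions
```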
Nov 6 23:36:42.274803 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 23:36:42.292803 systemd[1]: Starting ensure-sysext.service... Nov 6 23:36:42.300676 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 23:36:42.309356 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:36:42.332939 systemd[1]: Reload requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Nov 6 23:36:42.332960 systemd[1]: Reloading... Nov 6 23:36:42.365608 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 23:36:42.366166 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 23:36:42.368604 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 23:36:42.371705 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Nov 6 23:36:42.372015 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Nov 6 23:36:42.387283 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 23:36:42.387308 systemd-tmpfiles[1242]: Skipping /boot Nov 6 23:36:42.417184 systemd-udevd[1243]: Using default interface naming scheme 'v255'. Nov 6 23:36:42.456616 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 23:36:42.456650 systemd-tmpfiles[1242]: Skipping /boot Nov 6 23:36:42.511091 zram_generator::config[1273]: No configuration found. Nov 6 23:36:42.783076 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1282) Nov 6 23:36:42.857077 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 6 23:36:42.901140 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 6 23:36:42.941772 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:36:42.969427 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 6 23:36:43.001832 kernel: ACPI: button: Power Button [PWRF] Nov 6 23:36:43.076079 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Nov 6 23:36:43.109130 kernel: EDAC MC: Ver: 3.0.0 Nov 6 23:36:43.134255 kernel: ACPI: button: Sleep Button [SLPF] Nov 6 23:36:43.156227 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 23:36:43.223686 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 23:36:43.224115 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Nov 6 23:36:43.239340 systemd[1]: Reloading finished in 905 ms. Nov 6 23:36:43.253949 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:36:43.283169 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:36:43.331944 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 6 23:36:43.356836 systemd[1]: Finished ensure-sysext.service. Nov 6 23:36:43.411550 systemd[1]: Reached target tpm2.target - Trusted Platform Module. 
Nov 6 23:36:43.423419 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:36:43.430451 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:36:43.452710 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 23:36:43.465604 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:36:43.484439 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 6 23:36:43.508001 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:36:43.531355 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:36:43.545064 lvm[1355]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 6 23:36:43.553368 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:36:43.579333 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:36:43.583609 augenrules[1373]: No rules Nov 6 23:36:43.601349 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 6 23:36:43.614524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:36:43.623749 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 23:36:43.638342 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:36:43.645559 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 23:36:43.668408 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 23:36:43.692342 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 23:36:43.705258 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 23:36:43.724461 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 23:36:43.744809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:36:43.756344 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 23:36:43.767395 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:36:43.767810 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:36:43.783894 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 23:36:43.796963 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 6 23:36:43.797890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:36:43.798241 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:36:43.798711 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 23:36:43.799311 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:36:43.799773 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:36:43.800351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 6 23:36:43.800900 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:36:43.801278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:36:43.807350 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 23:36:43.808350 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 23:36:43.823240 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 6 23:36:43.832433 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:36:43.839696 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 6 23:36:43.842918 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Nov 6 23:36:43.843593 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:36:43.844014 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:36:43.854517 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 23:36:43.854879 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 6 23:36:43.866554 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 23:36:43.866667 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 23:36:43.869834 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 23:36:43.914858 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 23:36:43.931155 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 6 23:36:43.969330 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Nov 6 23:36:43.983015 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:36:43.996089 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 23:36:44.135231 systemd-networkd[1382]: lo: Link UP Nov 6 23:36:44.136026 systemd-networkd[1382]: lo: Gained carrier Nov 6 23:36:44.139475 systemd-networkd[1382]: Enumeration completed Nov 6 23:36:44.140556 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:36:44.140708 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 23:36:44.141408 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 23:36:44.141515 systemd-resolved[1383]: Positive Trust Anchors: Nov 6 23:36:44.141542 systemd-resolved[1383]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:36:44.141608 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:36:44.143429 systemd-networkd[1382]: eth0: Link UP Nov 6 23:36:44.143444 systemd-networkd[1382]: eth0: Gained carrier Nov 6 23:36:44.143480 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:36:44.150321 systemd-resolved[1383]: Defaulting to hostname 'linux'. Nov 6 23:36:44.153387 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:36:44.155265 systemd-networkd[1382]: eth0: Overlong DHCP hostname received, shortened from 'ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf.c.flatcar-212911.internal' to 'ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf' Nov 6 23:36:44.155289 systemd-networkd[1382]: eth0: DHCPv4 address 10.128.0.22/32, gateway 10.128.0.1 acquired from 169.254.169.254 Nov 6 23:36:44.165420 systemd[1]: Reached target network.target - Network. Nov 6 23:36:44.175307 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:36:44.190342 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:36:44.201436 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 23:36:44.214393 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 23:36:44.228537 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 23:36:44.239492 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 23:36:44.251321 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 23:36:44.263407 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 23:36:44.263487 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:36:44.274298 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:36:44.287000 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 23:36:44.302074 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 23:36:44.315417 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 23:36:44.330888 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 23:36:44.343491 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 23:36:44.367431 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 23:36:44.382929 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 23:36:44.403431 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
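eth0 is configured here by the stock /usr/lib/systemd/network/zz-default.network shipped in the /usr image. To pin or override its DHCP behaviour one would drop a higher-priority .network file into the writable /etc/systemd/network; the sketch below is hypothetical (file name and option set are illustrative), while the interface name and the lease values in the comments come from the log.

```bash
# Hypothetical override for eth0; the stock zz-default.network matches any interface.
sudo tee /etc/systemd/network/10-eth0.network <<'EOF'
[Match]
Name=eth0

[Network]
DHCP=ipv4
EOF

sudo networkctl reload      # ask systemd-networkd to re-evaluate .network files
networkctl status eth0      # on this instance: 10.128.0.22/32 via gateway 10.128.0.1
```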
Nov 6 23:36:44.429679 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 23:36:44.446228 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 23:36:44.458893 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:36:44.470290 systemd[1]: Reached target basic.target - Basic System. Nov 6 23:36:44.479373 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:36:44.479525 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:36:44.488262 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 23:36:44.509431 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 6 23:36:44.522567 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 23:36:44.562134 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 23:36:44.586340 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 23:36:44.600387 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 23:36:44.604328 jq[1435]: false Nov 6 23:36:44.611024 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 23:36:44.611924 coreos-metadata[1433]: Nov 06 23:36:44.611 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Nov 6 23:36:44.613321 coreos-metadata[1433]: Nov 06 23:36:44.613 INFO Fetch successful Nov 6 23:36:44.613321 coreos-metadata[1433]: Nov 06 23:36:44.613 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Nov 6 23:36:44.616123 coreos-metadata[1433]: Nov 06 23:36:44.614 INFO Fetch successful Nov 6 23:36:44.616511 coreos-metadata[1433]: Nov 06 23:36:44.616 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Nov 6 23:36:44.619386 coreos-metadata[1433]: Nov 06 23:36:44.619 INFO Fetch successful Nov 6 23:36:44.619613 coreos-metadata[1433]: Nov 06 23:36:44.619 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Nov 6 23:36:44.623041 coreos-metadata[1433]: Nov 06 23:36:44.621 INFO Fetch successful Nov 6 23:36:44.632261 systemd[1]: Started ntpd.service - Network Time Service. Nov 6 23:36:44.649774 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
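The coreos-metadata fetches above go against the GCE metadata server at 169.254.169.254, which only answers requests carrying the Metadata-Flavor header. They can be reproduced by hand with the same URLs the agent logs; curl is assumed to be available on the node.

```bash
# Reproduce the metadata queries coreos-metadata makes during boot.
curl -s -H 'Metadata-Flavor: Google' \
  http://169.254.169.254/computeMetadata/v1/instance/hostname
curl -s -H 'Metadata-Flavor: Google' \
  http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip
```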
Nov 6 23:36:44.665964 extend-filesystems[1438]: Found loop4 Nov 6 23:36:44.691782 extend-filesystems[1438]: Found loop5 Nov 6 23:36:44.691782 extend-filesystems[1438]: Found loop6 Nov 6 23:36:44.691782 extend-filesystems[1438]: Found loop7 Nov 6 23:36:44.691782 extend-filesystems[1438]: Found sda Nov 6 23:36:44.691782 extend-filesystems[1438]: Found sda1 Nov 6 23:36:44.691782 extend-filesystems[1438]: Found sda2 Nov 6 23:36:44.691782 extend-filesystems[1438]: Found sda3 Nov 6 23:36:44.691782 extend-filesystems[1438]: Found usr Nov 6 23:36:44.691782 extend-filesystems[1438]: Found sda4 Nov 6 23:36:44.691782 extend-filesystems[1438]: Found sda6 Nov 6 23:36:44.691782 extend-filesystems[1438]: Found sda7 Nov 6 23:36:44.691782 extend-filesystems[1438]: Found sda9 Nov 6 23:36:44.691782 extend-filesystems[1438]: Checking size of /dev/sda9 Nov 6 23:36:44.864533 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Nov 6 23:36:44.864591 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1288) Nov 6 23:36:44.668373 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: ntpd 4.2.8p17@1.4004-o Thu Nov 6 21:31:25 UTC 2025 (1): Starting Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: ---------------------------------------------------- Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: ntp-4 is maintained by Network Time Foundation, Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: corporation. 
Support and training for ntp-4 are Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: available at https://www.nwtime.org/support Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: ---------------------------------------------------- Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: proto: precision = 0.075 usec (-24) Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: basedate set to 2025-10-25 Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: gps base set to 2025-10-26 (week 2390) Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: Listen and drop on 0 v6wildcard [::]:123 Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: Listen normally on 2 lo 127.0.0.1:123 Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: Listen normally on 3 eth0 10.128.0.22:123 Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: Listen normally on 4 lo [::1]:123 Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: bind(21) AF_INET6 fe80::4001:aff:fe80:16%2#123 flags 0x11 failed: Cannot assign requested address Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:16%2#123 Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: failed to init interface for address fe80::4001:aff:fe80:16%2 Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: Listening on routing socket on fd #21 for interface updates Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 6 23:36:44.869238 ntpd[1441]: 6 Nov 23:36:44 ntpd[1441]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 6 23:36:44.874659 extend-filesystems[1438]: Resized partition /dev/sda9 Nov 6 23:36:44.711421 dbus-daemon[1434]: [system] SELinux support is enabled Nov 6 23:36:44.691344 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 23:36:44.910034 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024) Nov 6 23:36:44.719320 dbus-daemon[1434]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1382 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 6 23:36:44.764819 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 23:36:44.769928 ntpd[1441]: ntpd 4.2.8p17@1.4004-o Thu Nov 6 21:31:25 UTC 2025 (1): Starting Nov 6 23:36:44.785028 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Nov 6 23:36:44.769965 ntpd[1441]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 6 23:36:44.787285 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 23:36:44.946834 update_engine[1460]: I20251106 23:36:44.917603 1460 main.cc:92] Flatcar Update Engine starting Nov 6 23:36:44.946834 update_engine[1460]: I20251106 23:36:44.922712 1460 update_check_scheduler.cc:74] Next update check in 4m53s Nov 6 23:36:44.769981 ntpd[1441]: ---------------------------------------------------- Nov 6 23:36:44.796505 systemd[1]: Starting update-engine.service - Update Engine... 
Nov 6 23:36:44.769996 ntpd[1441]: ntp-4 is maintained by Network Time Foundation, Nov 6 23:36:44.847254 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 23:36:44.954405 jq[1466]: true Nov 6 23:36:44.770013 ntpd[1441]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 6 23:36:44.865876 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 23:36:44.770027 ntpd[1441]: corporation. Support and training for ntp-4 are Nov 6 23:36:44.904666 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 23:36:44.770065 ntpd[1441]: available at https://www.nwtime.org/support Nov 6 23:36:44.941833 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 23:36:44.773641 ntpd[1441]: ---------------------------------------------------- Nov 6 23:36:44.943273 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 23:36:44.783817 ntpd[1441]: proto: precision = 0.075 usec (-24) Nov 6 23:36:44.945115 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 23:36:44.789228 ntpd[1441]: basedate set to 2025-10-25 Nov 6 23:36:44.945531 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 23:36:44.789259 ntpd[1441]: gps base set to 2025-10-26 (week 2390) Nov 6 23:36:44.804601 ntpd[1441]: Listen and drop on 0 v6wildcard [::]:123 Nov 6 23:36:44.804696 ntpd[1441]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 6 23:36:44.810526 ntpd[1441]: Listen normally on 2 lo 127.0.0.1:123 Nov 6 23:36:44.810620 ntpd[1441]: Listen normally on 3 eth0 10.128.0.22:123 Nov 6 23:36:44.810689 ntpd[1441]: Listen normally on 4 lo [::1]:123 Nov 6 23:36:44.810792 ntpd[1441]: bind(21) AF_INET6 fe80::4001:aff:fe80:16%2#123 flags 0x11 failed: Cannot assign requested address Nov 6 23:36:44.810822 ntpd[1441]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:16%2#123 Nov 6 23:36:44.810843 ntpd[1441]: failed to init interface for address fe80::4001:aff:fe80:16%2 Nov 6 23:36:44.810901 ntpd[1441]: Listening on routing socket on fd #21 for interface updates Nov 6 23:36:44.819469 ntpd[1441]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 6 23:36:44.819523 ntpd[1441]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 6 23:36:44.972091 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Nov 6 23:36:44.968762 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 23:36:44.970137 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 23:36:45.010086 extend-filesystems[1456]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 6 23:36:45.010086 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 6 23:36:45.010086 extend-filesystems[1456]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Nov 6 23:36:45.049527 extend-filesystems[1438]: Resized filesystem in /dev/sda9 Nov 6 23:36:45.011911 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 23:36:45.015932 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
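extend-filesystems grows the root ext4 filesystem online (here from 1617920 to 3587067 4k blocks) so it fills the enlarged root partition. The manual equivalent is a plain resize2fs call, sketched below under the assumption that the partition itself has already been grown to its final size, as it is on first boot.

```bash
# resize2fs with no size argument expands the mounted ext4 filesystem on
# /dev/sda9 to fill its partition; online growing of ext4 is supported.
sudo resize2fs /dev/sda9
df -h /      # the root filesystem now reports the larger size
```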
Nov 6 23:36:45.022944 systemd-logind[1459]: Watching system buttons on /dev/input/event2 (Power Button) Nov 6 23:36:45.022988 systemd-logind[1459]: Watching system buttons on /dev/input/event3 (Sleep Button) Nov 6 23:36:45.023022 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 23:36:45.023464 systemd-logind[1459]: New seat seat0. Nov 6 23:36:45.074466 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 23:36:45.102078 jq[1470]: true Nov 6 23:36:45.099176 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 23:36:45.154322 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 6 23:36:45.159238 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 23:36:45.200879 tar[1469]: linux-amd64/LICENSE Nov 6 23:36:45.200879 tar[1469]: linux-amd64/helm Nov 6 23:36:45.227088 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 23:36:45.238864 systemd[1]: Started update-engine.service - Update Engine. Nov 6 23:36:45.254504 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 23:36:45.254832 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 23:36:45.256741 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 23:36:45.281453 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 6 23:36:45.293828 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 23:36:45.294161 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 23:36:45.310346 systemd-networkd[1382]: eth0: Gained IPv6LL Nov 6 23:36:45.321349 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 23:36:45.345133 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 23:36:45.349407 bash[1506]: Updated "/home/core/.ssh/authorized_keys" Nov 6 23:36:45.359101 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 23:36:45.378891 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 23:36:45.400560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:36:45.422601 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 23:36:45.443515 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Nov 6 23:36:45.466436 systemd[1]: Starting sshkeys.service... Nov 6 23:36:45.467717 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 23:36:45.514378 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 23:36:45.567879 init.sh[1516]: + '[' -e /etc/default/instance_configs.cfg.template ']' Nov 6 23:36:45.571168 init.sh[1516]: + echo -e '[InstanceSetup]\nset_host_keys = false' Nov 6 23:36:45.579210 init.sh[1516]: + /usr/bin/google_instance_setup Nov 6 23:36:45.577169 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Nov 6 23:36:45.605620 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 6 23:36:45.642911 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 23:36:45.644935 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 23:36:45.668692 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 23:36:45.728168 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 23:36:45.764693 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 6 23:36:45.773176 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 6 23:36:45.776248 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 23:36:45.773768 dbus-daemon[1434]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1508 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 6 23:36:45.809700 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 23:36:45.827447 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 23:36:45.832931 systemd[1]: Starting polkit.service - Authorization Manager... Nov 6 23:36:45.850848 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 23:36:45.862102 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 23:36:45.878035 coreos-metadata[1532]: Nov 06 23:36:45.877 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Nov 6 23:36:45.888512 coreos-metadata[1532]: Nov 06 23:36:45.880 INFO Fetch failed with 404: resource not found Nov 6 23:36:45.888512 coreos-metadata[1532]: Nov 06 23:36:45.880 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Nov 6 23:36:45.888512 coreos-metadata[1532]: Nov 06 23:36:45.881 INFO Fetch successful Nov 6 23:36:45.888512 coreos-metadata[1532]: Nov 06 23:36:45.881 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Nov 6 23:36:45.888512 coreos-metadata[1532]: Nov 06 23:36:45.882 INFO Fetch failed with 404: resource not found Nov 6 23:36:45.888512 coreos-metadata[1532]: Nov 06 23:36:45.882 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Nov 6 23:36:45.888512 coreos-metadata[1532]: Nov 06 23:36:45.884 INFO Fetch failed with 404: resource not found Nov 6 23:36:45.888512 coreos-metadata[1532]: Nov 06 23:36:45.885 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Nov 6 23:36:45.893360 coreos-metadata[1532]: Nov 06 23:36:45.891 INFO Fetch successful Nov 6 23:36:45.903776 unknown[1532]: wrote ssh authorized keys file for user: core Nov 6 23:36:45.995516 update-ssh-keys[1551]: Updated "/home/core/.ssh/authorized_keys" Nov 6 23:36:45.997572 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 6 23:36:46.005641 polkitd[1548]: Started polkitd version 121 Nov 6 23:36:46.022115 systemd[1]: Finished sshkeys.service. 
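
The coreos-metadata fetches above hit the GCE metadata server; the 404s for the legacy sshKeys attribute and for block-project-ssh-keys simply mean those attributes are not set on this instance, while instance and project ssh-keys both resolve. A quick sketch of querying the same endpoints by hand (the URLs are copied from the log; the Metadata-Flavor header is required by the metadata API):

    curl -s -H "Metadata-Flavor: Google" \
      http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys
    curl -s -H "Metadata-Flavor: Google" \
      http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys
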
Nov 6 23:36:46.045423 polkitd[1548]: Loading rules from directory /etc/polkit-1/rules.d Nov 6 23:36:46.045551 polkitd[1548]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 6 23:36:46.051564 polkitd[1548]: Finished loading, compiling and executing 2 rules Nov 6 23:36:46.054882 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 6 23:36:46.055587 systemd[1]: Started polkit.service - Authorization Manager. Nov 6 23:36:46.056623 polkitd[1548]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 6 23:36:46.113486 systemd-hostnamed[1508]: Hostname set to (transient) Nov 6 23:36:46.116522 systemd-resolved[1383]: System hostname changed to 'ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf'. Nov 6 23:36:46.156163 containerd[1471]: time="2025-11-06T23:36:46.155747098Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 6 23:36:46.255355 containerd[1471]: time="2025-11-06T23:36:46.254581644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:36:46.261414 containerd[1471]: time="2025-11-06T23:36:46.261311495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:36:46.261414 containerd[1471]: time="2025-11-06T23:36:46.261399763Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 6 23:36:46.262758 containerd[1471]: time="2025-11-06T23:36:46.262411172Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 6 23:36:46.262758 containerd[1471]: time="2025-11-06T23:36:46.262744900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 6 23:36:46.265216 containerd[1471]: time="2025-11-06T23:36:46.262798037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 6 23:36:46.265216 containerd[1471]: time="2025-11-06T23:36:46.262970127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:36:46.265216 containerd[1471]: time="2025-11-06T23:36:46.262999462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:36:46.265216 containerd[1471]: time="2025-11-06T23:36:46.264562307Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:36:46.265216 containerd[1471]: time="2025-11-06T23:36:46.265012113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 6 23:36:46.265216 containerd[1471]: time="2025-11-06T23:36:46.265124216Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:36:46.265216 containerd[1471]: time="2025-11-06T23:36:46.265147300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 6 23:36:46.266853 containerd[1471]: time="2025-11-06T23:36:46.266796724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:36:46.268759 containerd[1471]: time="2025-11-06T23:36:46.267334335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:36:46.268759 containerd[1471]: time="2025-11-06T23:36:46.268081290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:36:46.268759 containerd[1471]: time="2025-11-06T23:36:46.268115237Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 6 23:36:46.268759 containerd[1471]: time="2025-11-06T23:36:46.268475183Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 6 23:36:46.269679 containerd[1471]: time="2025-11-06T23:36:46.269198855Z" level=info msg="metadata content store policy set" policy=shared Nov 6 23:36:46.289435 containerd[1471]: time="2025-11-06T23:36:46.287308683Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 6 23:36:46.289435 containerd[1471]: time="2025-11-06T23:36:46.287419884Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 6 23:36:46.289435 containerd[1471]: time="2025-11-06T23:36:46.287448052Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 6 23:36:46.289435 containerd[1471]: time="2025-11-06T23:36:46.287595432Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 6 23:36:46.289435 containerd[1471]: time="2025-11-06T23:36:46.287625751Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 6 23:36:46.289435 containerd[1471]: time="2025-11-06T23:36:46.287873722Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 6 23:36:46.289435 containerd[1471]: time="2025-11-06T23:36:46.289350937Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 6 23:36:46.289874 containerd[1471]: time="2025-11-06T23:36:46.289586720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 6 23:36:46.289874 containerd[1471]: time="2025-11-06T23:36:46.289617981Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 6 23:36:46.289874 containerd[1471]: time="2025-11-06T23:36:46.289644403Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 6 23:36:46.289874 containerd[1471]: time="2025-11-06T23:36:46.289672266Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Nov 6 23:36:46.289874 containerd[1471]: time="2025-11-06T23:36:46.289696825Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 6 23:36:46.289874 containerd[1471]: time="2025-11-06T23:36:46.289719707Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 6 23:36:46.289874 containerd[1471]: time="2025-11-06T23:36:46.289747063Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 6 23:36:46.289874 containerd[1471]: time="2025-11-06T23:36:46.289772312Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 6 23:36:46.289874 containerd[1471]: time="2025-11-06T23:36:46.289795303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 6 23:36:46.289874 containerd[1471]: time="2025-11-06T23:36:46.289816997Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 6 23:36:46.289874 containerd[1471]: time="2025-11-06T23:36:46.289849675Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 6 23:36:46.290356 containerd[1471]: time="2025-11-06T23:36:46.289884503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.290356 containerd[1471]: time="2025-11-06T23:36:46.289908793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.290356 containerd[1471]: time="2025-11-06T23:36:46.289930731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.290356 containerd[1471]: time="2025-11-06T23:36:46.289953972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.290356 containerd[1471]: time="2025-11-06T23:36:46.289989130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.290356 containerd[1471]: time="2025-11-06T23:36:46.290013369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.290034660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.290806971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.290851485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.290878958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.290898339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.290919192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.290938570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.290963882Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.291004396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.291030222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.291079027Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.291160131Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.291189775Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 6 23:36:46.291552 containerd[1471]: time="2025-11-06T23:36:46.291208968Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 6 23:36:46.292225 containerd[1471]: time="2025-11-06T23:36:46.291232042Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 6 23:36:46.292225 containerd[1471]: time="2025-11-06T23:36:46.291249083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 6 23:36:46.292225 containerd[1471]: time="2025-11-06T23:36:46.291270006Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 6 23:36:46.292225 containerd[1471]: time="2025-11-06T23:36:46.291285673Z" level=info msg="NRI interface is disabled by configuration." Nov 6 23:36:46.292225 containerd[1471]: time="2025-11-06T23:36:46.291301975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 6 23:36:46.294714 containerd[1471]: time="2025-11-06T23:36:46.293951912Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 6 23:36:46.294714 containerd[1471]: time="2025-11-06T23:36:46.294090211Z" level=info msg="Connect containerd service" Nov 6 23:36:46.294714 containerd[1471]: time="2025-11-06T23:36:46.294181169Z" level=info msg="using legacy CRI server" Nov 6 23:36:46.294714 containerd[1471]: time="2025-11-06T23:36:46.294195028Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 23:36:46.294714 containerd[1471]: time="2025-11-06T23:36:46.294475120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 6 23:36:46.299861 containerd[1471]: time="2025-11-06T23:36:46.299130677Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:36:46.299861 
containerd[1471]: time="2025-11-06T23:36:46.299428639Z" level=info msg="Start subscribing containerd event" Nov 6 23:36:46.299861 containerd[1471]: time="2025-11-06T23:36:46.299519265Z" level=info msg="Start recovering state" Nov 6 23:36:46.299861 containerd[1471]: time="2025-11-06T23:36:46.299627704Z" level=info msg="Start event monitor" Nov 6 23:36:46.299861 containerd[1471]: time="2025-11-06T23:36:46.299654804Z" level=info msg="Start snapshots syncer" Nov 6 23:36:46.299861 containerd[1471]: time="2025-11-06T23:36:46.299670494Z" level=info msg="Start cni network conf syncer for default" Nov 6 23:36:46.299861 containerd[1471]: time="2025-11-06T23:36:46.299683573Z" level=info msg="Start streaming server" Nov 6 23:36:46.302550 containerd[1471]: time="2025-11-06T23:36:46.302139990Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 23:36:46.302550 containerd[1471]: time="2025-11-06T23:36:46.302346635Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 23:36:46.303336 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 23:36:46.305754 containerd[1471]: time="2025-11-06T23:36:46.304919559Z" level=info msg="containerd successfully booted in 0.152162s" Nov 6 23:36:46.474778 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 23:36:46.496600 systemd[1]: Started sshd@0-10.128.0.22:22-139.178.89.65:35272.service - OpenSSH per-connection server daemon (139.178.89.65:35272). Nov 6 23:36:46.763022 tar[1469]: linux-amd64/README.md Nov 6 23:36:46.799344 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 23:36:46.855115 instance-setup[1529]: INFO Running google_set_multiqueue. Nov 6 23:36:46.879883 instance-setup[1529]: INFO Set channels for eth0 to 2. Nov 6 23:36:46.885850 instance-setup[1529]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Nov 6 23:36:46.889288 instance-setup[1529]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Nov 6 23:36:46.889394 instance-setup[1529]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Nov 6 23:36:46.892169 instance-setup[1529]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Nov 6 23:36:46.892257 instance-setup[1529]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Nov 6 23:36:46.895124 instance-setup[1529]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Nov 6 23:36:46.895200 instance-setup[1529]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Nov 6 23:36:46.898261 instance-setup[1529]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Nov 6 23:36:46.911388 instance-setup[1529]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 6 23:36:46.917496 instance-setup[1529]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 6 23:36:46.920300 instance-setup[1529]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Nov 6 23:36:46.920368 instance-setup[1529]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Nov 6 23:36:46.950993 init.sh[1516]: + /usr/bin/google_metadata_script_runner --script-type startup Nov 6 23:36:46.990846 sshd[1567]: Accepted publickey for core from 139.178.89.65 port 35272 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:36:46.999464 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:36:47.023103 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 23:36:47.047205 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 23:36:47.093159 systemd-logind[1459]: New session 1 of user core. Nov 6 23:36:47.118941 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 23:36:47.145349 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 23:36:47.188499 (systemd)[1607]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 23:36:47.196328 systemd-logind[1459]: New session c1 of user core. Nov 6 23:36:47.222187 startup-script[1602]: INFO Starting startup scripts. Nov 6 23:36:47.231625 startup-script[1602]: INFO No startup scripts found in metadata. Nov 6 23:36:47.231686 startup-script[1602]: INFO Finished running startup scripts. Nov 6 23:36:47.270218 init.sh[1516]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Nov 6 23:36:47.270218 init.sh[1516]: + daemon_pids=() Nov 6 23:36:47.270218 init.sh[1516]: + for d in accounts clock_skew network Nov 6 23:36:47.270461 init.sh[1516]: + daemon_pids+=($!) Nov 6 23:36:47.270461 init.sh[1516]: + for d in accounts clock_skew network Nov 6 23:36:47.272523 init.sh[1516]: + daemon_pids+=($!) Nov 6 23:36:47.272523 init.sh[1516]: + for d in accounts clock_skew network Nov 6 23:36:47.272715 init.sh[1613]: + /usr/bin/google_accounts_daemon Nov 6 23:36:47.273099 init.sh[1614]: + /usr/bin/google_clock_skew_daemon Nov 6 23:36:47.273392 init.sh[1615]: + /usr/bin/google_network_daemon Nov 6 23:36:47.273692 init.sh[1516]: + daemon_pids+=($!) Nov 6 23:36:47.273692 init.sh[1516]: + NOTIFY_SOCKET=/run/systemd/notify Nov 6 23:36:47.273776 init.sh[1516]: + /usr/bin/systemd-notify --ready Nov 6 23:36:47.286095 systemd[1]: Started oem-gce.service - GCE Linux Agent. Nov 6 23:36:47.315169 init.sh[1516]: + wait -n 1613 1614 1615 Nov 6 23:36:47.649040 systemd[1607]: Queued start job for default target default.target. Nov 6 23:36:47.656152 systemd[1607]: Created slice app.slice - User Application Slice. Nov 6 23:36:47.656211 systemd[1607]: Reached target paths.target - Paths. Nov 6 23:36:47.656435 systemd[1607]: Reached target timers.target - Timers. Nov 6 23:36:47.668363 systemd[1607]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 23:36:47.707534 systemd[1607]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 23:36:47.707787 systemd[1607]: Reached target sockets.target - Sockets. 
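
google_set_multiqueue above pins the virtio-net IRQs to CPUs and sets an XPS mask per TX queue; the two "write error: Value too large" lines appear to come from a sysfs write it could not complete on this machine shape. A rough sketch of the same pinning done manually, with the IRQ numbers, queue paths, and values copied from the log for this 2-vCPU instance (they describe what this host got, not a recommendation):

    echo 0 > /proc/irq/31/smp_affinity_list             # virtio1 IRQ 31 -> CPU 0
    echo 0 > /proc/irq/32/smp_affinity_list             # virtio1 IRQ 32 -> CPU 0
    echo 1 > /proc/irq/33/smp_affinity_list             # virtio1 IRQ 33 -> CPU 1
    echo 1 > /proc/irq/34/smp_affinity_list             # virtio1 IRQ 34 -> CPU 1
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus   # XPS mask 0x1 for tx-0
    echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus   # XPS mask 0x2 for tx-1
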
Nov 6 23:36:47.707887 systemd[1607]: Reached target basic.target - Basic System. Nov 6 23:36:47.707969 systemd[1607]: Reached target default.target - Main User Target. Nov 6 23:36:47.708025 systemd[1607]: Startup finished in 496ms. Nov 6 23:36:47.708345 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 23:36:47.728595 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 23:36:47.775124 ntpd[1441]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:16%2]:123 Nov 6 23:36:47.775581 ntpd[1441]: 6 Nov 23:36:47 ntpd[1441]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:16%2]:123 Nov 6 23:36:47.823780 google-networking[1615]: INFO Starting Google Networking daemon. Nov 6 23:36:48.001842 systemd[1]: Started sshd@1-10.128.0.22:22-139.178.89.65:35276.service - OpenSSH per-connection server daemon (139.178.89.65:35276). Nov 6 23:36:48.012584 google-clock-skew[1614]: INFO Starting Google Clock Skew daemon. Nov 6 23:36:48.020899 google-clock-skew[1614]: INFO Clock drift token has changed: 0. Nov 6 23:36:48.094025 groupadd[1631]: group added to /etc/group: name=google-sudoers, GID=1000 Nov 6 23:36:48.101791 groupadd[1631]: group added to /etc/gshadow: name=google-sudoers Nov 6 23:36:48.186817 groupadd[1631]: new group: name=google-sudoers, GID=1000 Nov 6 23:36:48.226528 google-accounts[1613]: INFO Starting Google Accounts daemon. Nov 6 23:36:48.243448 google-accounts[1613]: WARNING OS Login not installed. Nov 6 23:36:48.247613 google-accounts[1613]: INFO Creating a new user account for 0. Nov 6 23:36:48.258257 init.sh[1640]: useradd: invalid user name '0': use --badname to ignore Nov 6 23:36:48.259300 google-accounts[1613]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Nov 6 23:36:48.340239 sshd[1629]: Accepted publickey for core from 139.178.89.65 port 35276 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:36:48.342891 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:36:48.352239 systemd-logind[1459]: New session 2 of user core. Nov 6 23:36:48.358396 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 23:36:48.564419 sshd[1642]: Connection closed by 139.178.89.65 port 35276 Nov 6 23:36:48.565842 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Nov 6 23:36:48.570877 systemd[1]: sshd@1-10.128.0.22:22-139.178.89.65:35276.service: Deactivated successfully. Nov 6 23:36:48.573713 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 23:36:48.577024 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Nov 6 23:36:48.579512 systemd-logind[1459]: Removed session 2. Nov 6 23:36:48.631671 systemd[1]: Started sshd@2-10.128.0.22:22-139.178.89.65:35288.service - OpenSSH per-connection server daemon (139.178.89.65:35288). Nov 6 23:36:48.649355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:36:48.665476 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 23:36:48.673031 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:36:48.679122 systemd[1]: Startup finished in 1.815s (kernel) + 9.395s (initrd) + 10.479s (userspace) = 21.691s. Nov 6 23:36:49.000857 systemd-resolved[1383]: Clock change detected. Flushing caches. 
Nov 6 23:36:49.006628 google-clock-skew[1614]: INFO Synced system time with hardware clock. Nov 6 23:36:49.256196 sshd[1652]: Accepted publickey for core from 139.178.89.65 port 35288 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:36:49.260457 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:36:49.270529 systemd-logind[1459]: New session 3 of user core. Nov 6 23:36:49.277798 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 23:36:49.472767 sshd[1664]: Connection closed by 139.178.89.65 port 35288 Nov 6 23:36:49.473801 sshd-session[1652]: pam_unix(sshd:session): session closed for user core Nov 6 23:36:49.479084 systemd[1]: sshd@2-10.128.0.22:22-139.178.89.65:35288.service: Deactivated successfully. Nov 6 23:36:49.482538 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 23:36:49.485256 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Nov 6 23:36:49.487002 systemd-logind[1459]: Removed session 3. Nov 6 23:36:50.255864 kubelet[1653]: E1106 23:36:50.255791 1653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:36:50.258509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:36:50.258792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:36:50.259453 systemd[1]: kubelet.service: Consumed 1.504s CPU time, 268.8M memory peak. Nov 6 23:36:59.536967 systemd[1]: Started sshd@3-10.128.0.22:22-139.178.89.65:40762.service - OpenSSH per-connection server daemon (139.178.89.65:40762). Nov 6 23:36:59.846043 sshd[1672]: Accepted publickey for core from 139.178.89.65 port 40762 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:36:59.849046 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:36:59.857017 systemd-logind[1459]: New session 4 of user core. Nov 6 23:36:59.866863 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 23:37:00.062736 sshd[1674]: Connection closed by 139.178.89.65 port 40762 Nov 6 23:37:00.064083 sshd-session[1672]: pam_unix(sshd:session): session closed for user core Nov 6 23:37:00.069993 systemd[1]: sshd@3-10.128.0.22:22-139.178.89.65:40762.service: Deactivated successfully. Nov 6 23:37:00.072782 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 23:37:00.073908 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Nov 6 23:37:00.075929 systemd-logind[1459]: Removed session 4. Nov 6 23:37:00.124028 systemd[1]: Started sshd@4-10.128.0.22:22-139.178.89.65:40764.service - OpenSSH per-connection server daemon (139.178.89.65:40764). Nov 6 23:37:00.364932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 23:37:00.376162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:37:00.427687 sshd[1680]: Accepted publickey for core from 139.178.89.65 port 40764 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:37:00.429871 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:37:00.438725 systemd-logind[1459]: New session 5 of user core. 
Nov 6 23:37:00.450804 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 23:37:00.640276 sshd[1685]: Connection closed by 139.178.89.65 port 40764 Nov 6 23:37:00.641056 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Nov 6 23:37:00.648566 systemd[1]: sshd@4-10.128.0.22:22-139.178.89.65:40764.service: Deactivated successfully. Nov 6 23:37:00.651712 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 23:37:00.653953 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Nov 6 23:37:00.656369 systemd-logind[1459]: Removed session 5. Nov 6 23:37:00.706167 systemd[1]: Started sshd@5-10.128.0.22:22-139.178.89.65:40768.service - OpenSSH per-connection server daemon (139.178.89.65:40768). Nov 6 23:37:00.758534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:37:00.776360 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:37:00.857174 kubelet[1698]: E1106 23:37:00.857108 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:37:00.863048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:37:00.863307 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:37:00.864715 systemd[1]: kubelet.service: Consumed 262ms CPU time, 111.8M memory peak. Nov 6 23:37:01.032832 sshd[1691]: Accepted publickey for core from 139.178.89.65 port 40768 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:37:01.035018 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:37:01.044315 systemd-logind[1459]: New session 6 of user core. Nov 6 23:37:01.055883 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 23:37:01.256758 sshd[1705]: Connection closed by 139.178.89.65 port 40768 Nov 6 23:37:01.257840 sshd-session[1691]: pam_unix(sshd:session): session closed for user core Nov 6 23:37:01.263628 systemd[1]: sshd@5-10.128.0.22:22-139.178.89.65:40768.service: Deactivated successfully. Nov 6 23:37:01.266368 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 23:37:01.267658 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Nov 6 23:37:01.270602 systemd-logind[1459]: Removed session 6. Nov 6 23:37:01.315047 systemd[1]: Started sshd@6-10.128.0.22:22-139.178.89.65:40776.service - OpenSSH per-connection server daemon (139.178.89.65:40776). Nov 6 23:37:01.627150 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 40776 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:37:01.629638 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:37:01.638916 systemd-logind[1459]: New session 7 of user core. Nov 6 23:37:01.648860 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 6 23:37:01.837169 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 23:37:01.837991 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:37:01.857936 sudo[1714]: pam_unix(sudo:session): session closed for user root Nov 6 23:37:01.900880 sshd[1713]: Connection closed by 139.178.89.65 port 40776 Nov 6 23:37:01.902113 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Nov 6 23:37:01.909358 systemd[1]: sshd@6-10.128.0.22:22-139.178.89.65:40776.service: Deactivated successfully. Nov 6 23:37:01.912176 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 23:37:01.913936 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Nov 6 23:37:01.916719 systemd-logind[1459]: Removed session 7. Nov 6 23:37:01.964973 systemd[1]: Started sshd@7-10.128.0.22:22-139.178.89.65:40782.service - OpenSSH per-connection server daemon (139.178.89.65:40782). Nov 6 23:37:02.264163 sshd[1720]: Accepted publickey for core from 139.178.89.65 port 40782 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:37:02.266327 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:37:02.277088 systemd-logind[1459]: New session 8 of user core. Nov 6 23:37:02.287857 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 23:37:02.449918 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 23:37:02.450484 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:37:02.458351 sudo[1724]: pam_unix(sudo:session): session closed for user root Nov 6 23:37:02.475919 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 23:37:02.476490 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:37:02.496428 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:37:02.554226 augenrules[1746]: No rules Nov 6 23:37:02.557146 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:37:02.557628 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:37:02.559690 sudo[1723]: pam_unix(sudo:session): session closed for user root Nov 6 23:37:02.603630 sshd[1722]: Connection closed by 139.178.89.65 port 40782 Nov 6 23:37:02.604510 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Nov 6 23:37:02.609591 systemd[1]: sshd@7-10.128.0.22:22-139.178.89.65:40782.service: Deactivated successfully. Nov 6 23:37:02.613086 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 23:37:02.615425 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit. Nov 6 23:37:02.617950 systemd-logind[1459]: Removed session 8. Nov 6 23:37:02.661313 systemd[1]: Started sshd@8-10.128.0.22:22-139.178.89.65:40792.service - OpenSSH per-connection server daemon (139.178.89.65:40792). Nov 6 23:37:02.972778 sshd[1755]: Accepted publickey for core from 139.178.89.65 port 40792 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:37:02.975013 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:37:02.982857 systemd-logind[1459]: New session 9 of user core. Nov 6 23:37:02.990860 systemd[1]: Started session-9.scope - Session 9 of User core. 
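
Sessions 7 and 8 above run a short privileged sequence over SSH: put SELinux into enforcing mode, remove the shipped audit rule files, then reload the audit rules (augenrules then reports "No rules"). The same sequence as plain commands, copied from the sudo log lines:

    sudo setenforce 1
    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules
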
Nov 6 23:37:03.156095 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 23:37:03.156707 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:37:03.755126 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 23:37:03.758153 (dockerd)[1775]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 23:37:04.356942 dockerd[1775]: time="2025-11-06T23:37:04.356775742Z" level=info msg="Starting up" Nov 6 23:37:04.618773 dockerd[1775]: time="2025-11-06T23:37:04.617844166Z" level=info msg="Loading containers: start." Nov 6 23:37:05.026833 kernel: Initializing XFRM netlink socket Nov 6 23:37:05.229386 systemd-networkd[1382]: docker0: Link UP Nov 6 23:37:05.296545 dockerd[1775]: time="2025-11-06T23:37:05.296140713Z" level=info msg="Loading containers: done." Nov 6 23:37:05.338986 dockerd[1775]: time="2025-11-06T23:37:05.338774940Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 23:37:05.339290 dockerd[1775]: time="2025-11-06T23:37:05.339057656Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Nov 6 23:37:05.339290 dockerd[1775]: time="2025-11-06T23:37:05.339236088Z" level=info msg="Daemon has completed initialization" Nov 6 23:37:05.420849 dockerd[1775]: time="2025-11-06T23:37:05.420015536Z" level=info msg="API listen on /run/docker.sock" Nov 6 23:37:05.420158 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 23:37:06.759439 containerd[1471]: time="2025-11-06T23:37:06.759381296Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 6 23:37:07.306059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1386274628.mount: Deactivated successfully. 
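
The dockerd startup above settles on the overlay2 storage driver and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled. A small sketch for confirming the driver and server version the daemon reported, assuming the docker CLI is present on the host:

    docker info --format '{{.Driver}}'              # expect: overlay2
    docker version --format '{{.Server.Version}}'   # expect: 27.3.1, matching the daemon log
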
Nov 6 23:37:09.486353 containerd[1471]: time="2025-11-06T23:37:09.486265220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:09.489382 containerd[1471]: time="2025-11-06T23:37:09.489291642Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30122476" Nov 6 23:37:09.492206 containerd[1471]: time="2025-11-06T23:37:09.492110584Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:09.500272 containerd[1471]: time="2025-11-06T23:37:09.498814321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:09.503547 containerd[1471]: time="2025-11-06T23:37:09.502392425Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.742951466s" Nov 6 23:37:09.503547 containerd[1471]: time="2025-11-06T23:37:09.502656925Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 6 23:37:09.503827 containerd[1471]: time="2025-11-06T23:37:09.503668688Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 6 23:37:10.895364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 23:37:10.905452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:37:11.275879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:37:11.285651 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:37:11.390424 kubelet[2032]: E1106 23:37:11.390213 2032 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:37:11.396966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:37:11.397326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:37:11.399694 systemd[1]: kubelet.service: Consumed 282ms CPU time, 110.3M memory peak. 
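
The repeated kubelet failures above (restart counters 1 and 2, with another at counter 3 further down) are the expected pre-bootstrap state: /var/lib/kubelet/config.yaml only exists once the node has been initialised or joined. A minimal sketch of how that file normally appears, assuming a kubeadm-style bootstrap; nothing in this log confirms which tool will be used on this node, and the flags shown are illustrative:

    # before bootstrap: the file the kubelet is complaining about is simply absent
    ls -l /var/lib/kubelet/config.yaml || echo "not bootstrapped yet"
    # either of these writes /var/lib/kubelet/config.yaml and lets the kubelet start cleanly:
    #   kubeadm init --pod-network-cidr=10.244.0.0/16                                  # control-plane node
    #   kubeadm join <api-server>:6443 --token ... --discovery-token-ca-cert-hash ...  # worker node
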
Nov 6 23:37:11.739235 containerd[1471]: time="2025-11-06T23:37:11.739156557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:11.743761 containerd[1471]: time="2025-11-06T23:37:11.743670810Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26022778" Nov 6 23:37:11.747520 containerd[1471]: time="2025-11-06T23:37:11.747388452Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:11.761517 containerd[1471]: time="2025-11-06T23:37:11.760089600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:11.762736 containerd[1471]: time="2025-11-06T23:37:11.762669289Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.258959675s" Nov 6 23:37:11.762736 containerd[1471]: time="2025-11-06T23:37:11.762721750Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 6 23:37:11.765175 containerd[1471]: time="2025-11-06T23:37:11.764345659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 6 23:37:13.481246 containerd[1471]: time="2025-11-06T23:37:13.481159484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:13.483201 containerd[1471]: time="2025-11-06T23:37:13.483133579Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20157484" Nov 6 23:37:13.484736 containerd[1471]: time="2025-11-06T23:37:13.484693195Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:13.492683 containerd[1471]: time="2025-11-06T23:37:13.492583342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:13.495522 containerd[1471]: time="2025-11-06T23:37:13.494644999Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.729988872s" Nov 6 23:37:13.495522 containerd[1471]: time="2025-11-06T23:37:13.494701714Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 6 23:37:13.496234 containerd[1471]: 
time="2025-11-06T23:37:13.496185996Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 6 23:37:14.771153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759759717.mount: Deactivated successfully. Nov 6 23:37:15.635198 containerd[1471]: time="2025-11-06T23:37:15.635113742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:15.636905 containerd[1471]: time="2025-11-06T23:37:15.636763831Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31931364" Nov 6 23:37:15.638912 containerd[1471]: time="2025-11-06T23:37:15.638831602Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:15.643572 containerd[1471]: time="2025-11-06T23:37:15.642392666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:15.643572 containerd[1471]: time="2025-11-06T23:37:15.643340551Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.147108832s" Nov 6 23:37:15.643572 containerd[1471]: time="2025-11-06T23:37:15.643408047Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 6 23:37:15.644550 containerd[1471]: time="2025-11-06T23:37:15.644394126Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 6 23:37:16.140084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3952541400.mount: Deactivated successfully. Nov 6 23:37:16.448139 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Nov 6 23:37:17.901784 containerd[1471]: time="2025-11-06T23:37:17.901700943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:17.904049 containerd[1471]: time="2025-11-06T23:37:17.903694019Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20948880" Nov 6 23:37:17.907214 containerd[1471]: time="2025-11-06T23:37:17.906110841Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:17.911529 containerd[1471]: time="2025-11-06T23:37:17.911451327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:17.913485 containerd[1471]: time="2025-11-06T23:37:17.913416916Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.268852887s" Nov 6 23:37:17.913677 containerd[1471]: time="2025-11-06T23:37:17.913651064Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 6 23:37:17.914395 containerd[1471]: time="2025-11-06T23:37:17.914362852Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 23:37:18.391692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount876175545.mount: Deactivated successfully. 
Nov 6 23:37:18.403214 containerd[1471]: time="2025-11-06T23:37:18.403139387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:18.404704 containerd[1471]: time="2025-11-06T23:37:18.404423851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Nov 6 23:37:18.407493 containerd[1471]: time="2025-11-06T23:37:18.406179624Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:18.411050 containerd[1471]: time="2025-11-06T23:37:18.410988342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:18.412337 containerd[1471]: time="2025-11-06T23:37:18.412276915Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 497.876972ms" Nov 6 23:37:18.412337 containerd[1471]: time="2025-11-06T23:37:18.412321398Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 6 23:37:18.413301 containerd[1471]: time="2025-11-06T23:37:18.413267115Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 6 23:37:18.883723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3748072499.mount: Deactivated successfully. Nov 6 23:37:21.386845 containerd[1471]: time="2025-11-06T23:37:21.386765022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:21.388541 containerd[1471]: time="2025-11-06T23:37:21.388418973Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58384071" Nov 6 23:37:21.390409 containerd[1471]: time="2025-11-06T23:37:21.390333500Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:21.396502 containerd[1471]: time="2025-11-06T23:37:21.395744178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:21.402622 containerd[1471]: time="2025-11-06T23:37:21.402565562Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.989253854s" Nov 6 23:37:21.402847 containerd[1471]: time="2025-11-06T23:37:21.402824175Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 6 23:37:21.645333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Nov 6 23:37:21.655223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:37:21.978797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:37:21.987041 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:37:22.061872 kubelet[2194]: E1106 23:37:22.061793 2194 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:37:22.066849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:37:22.067099 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:37:22.067893 systemd[1]: kubelet.service: Consumed 229ms CPU time, 108.1M memory peak. Nov 6 23:37:25.840417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:37:25.840771 systemd[1]: kubelet.service: Consumed 229ms CPU time, 108.1M memory peak. Nov 6 23:37:25.848968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:37:25.899046 systemd[1]: Reload requested from client PID 2208 ('systemctl') (unit session-9.scope)... Nov 6 23:37:25.899073 systemd[1]: Reloading... Nov 6 23:37:26.105551 zram_generator::config[2253]: No configuration found. Nov 6 23:37:26.277179 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:37:26.426380 systemd[1]: Reloading finished in 526 ms. Nov 6 23:37:26.502789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:37:26.516268 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:37:26.521425 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:37:26.522581 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 23:37:26.522960 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:37:26.523045 systemd[1]: kubelet.service: Consumed 166ms CPU time, 99.3M memory peak. Nov 6 23:37:26.541313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:37:26.854750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:37:26.869302 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:37:26.946511 kubelet[2306]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:37:26.946511 kubelet[2306]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 23:37:26.946511 kubelet[2306]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
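The first kubelet start above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm during init/join, and the later start (PID 2306) proceeds once it is present. A small, purely illustrative Go check that reproduces the same "no such file or directory" condition, with the path taken from the error in the log:

// Sketch only: probe the kubelet config path from the log and report the
// same missing-file condition the kubelet logged before exiting.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
		fmt.Printf("kubelet config missing: %v\n", err)
		os.Exit(1)
	} else if err != nil {
		fmt.Printf("stat %s: %v\n", path, err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present:", path)
}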
Nov 6 23:37:26.946511 kubelet[2306]: I1106 23:37:26.944483 2306 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:37:28.043387 kubelet[2306]: I1106 23:37:28.043322 2306 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 23:37:28.043387 kubelet[2306]: I1106 23:37:28.043359 2306 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:37:28.044045 kubelet[2306]: I1106 23:37:28.043714 2306 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 23:37:28.086332 kubelet[2306]: I1106 23:37:28.086284 2306 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:37:28.086815 kubelet[2306]: E1106 23:37:28.086737 2306 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 23:37:28.106195 kubelet[2306]: E1106 23:37:28.106076 2306 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 6 23:37:28.106195 kubelet[2306]: I1106 23:37:28.106165 2306 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 6 23:37:28.111786 kubelet[2306]: I1106 23:37:28.111725 2306 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 23:37:28.112182 kubelet[2306]: I1106 23:37:28.112129 2306 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:37:28.112430 kubelet[2306]: I1106 23:37:28.112168 2306 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:37:28.112663 kubelet[2306]: I1106 23:37:28.112432 2306 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 23:37:28.112663 kubelet[2306]: I1106 23:37:28.112452 2306 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 23:37:28.114228 kubelet[2306]: I1106 23:37:28.114176 2306 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:37:28.118173 kubelet[2306]: I1106 23:37:28.118114 2306 kubelet.go:480] "Attempting to sync node with API server" Nov 6 23:37:28.118173 kubelet[2306]: I1106 23:37:28.118159 2306 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:37:28.118435 kubelet[2306]: I1106 23:37:28.118208 2306 kubelet.go:386] "Adding apiserver pod source" Nov 6 23:37:28.121875 kubelet[2306]: I1106 23:37:28.121214 2306 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:37:28.130897 kubelet[2306]: E1106 23:37:28.130846 2306 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf&limit=500&resourceVersion=0\": dial tcp 10.128.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 23:37:28.131496 kubelet[2306]: E1106 23:37:28.131440 2306 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.22:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 23:37:28.132037 kubelet[2306]: I1106 23:37:28.132007 2306 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 6 23:37:28.132818 kubelet[2306]: I1106 23:37:28.132761 2306 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 23:37:28.135041 kubelet[2306]: W1106 23:37:28.134985 2306 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 23:37:28.152917 kubelet[2306]: I1106 23:37:28.152853 2306 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 23:37:28.153114 kubelet[2306]: I1106 23:37:28.152950 2306 server.go:1289] "Started kubelet" Nov 6 23:37:28.153494 kubelet[2306]: I1106 23:37:28.153256 2306 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:37:28.156337 kubelet[2306]: I1106 23:37:28.155355 2306 server.go:317] "Adding debug handlers to kubelet server" Nov 6 23:37:28.159819 kubelet[2306]: I1106 23:37:28.159719 2306 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:37:28.160290 kubelet[2306]: I1106 23:37:28.160262 2306 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 23:37:28.162808 kubelet[2306]: E1106 23:37:28.160512 2306 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf.18758f2bf5a32e78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf,UID:ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf,},FirstTimestamp:2025-11-06 23:37:28.152895096 +0000 UTC m=+1.275072476,LastTimestamp:2025-11-06 23:37:28.152895096 +0000 UTC m=+1.275072476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf,}" Nov 6 23:37:28.168518 kubelet[2306]: I1106 23:37:28.165653 2306 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:37:28.170896 kubelet[2306]: I1106 23:37:28.166787 2306 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:37:28.170896 kubelet[2306]: I1106 23:37:28.170494 2306 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 23:37:28.171116 kubelet[2306]: I1106 23:37:28.171058 2306 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 23:37:28.171171 kubelet[2306]: I1106 23:37:28.171138 2306 reconciler.go:26] "Reconciler: start to sync state" Nov 6 23:37:28.172525 kubelet[2306]: E1106 23:37:28.172476 2306 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.128.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 23:37:28.173061 kubelet[2306]: I1106 23:37:28.173030 2306 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:37:28.173565 kubelet[2306]: E1106 23:37:28.173536 2306 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" not found" Nov 6 23:37:28.176400 kubelet[2306]: E1106 23:37:28.176349 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf?timeout=10s\": dial tcp 10.128.0.22:6443: connect: connection refused" interval="200ms" Nov 6 23:37:28.176550 kubelet[2306]: E1106 23:37:28.176515 2306 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 23:37:28.176716 kubelet[2306]: I1106 23:37:28.176674 2306 factory.go:223] Registration of the containerd container factory successfully Nov 6 23:37:28.176716 kubelet[2306]: I1106 23:37:28.176694 2306 factory.go:223] Registration of the systemd container factory successfully Nov 6 23:37:28.210817 kubelet[2306]: I1106 23:37:28.210759 2306 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:37:28.210817 kubelet[2306]: I1106 23:37:28.210784 2306 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:37:28.210817 kubelet[2306]: I1106 23:37:28.210811 2306 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:37:28.215182 kubelet[2306]: I1106 23:37:28.214007 2306 policy_none.go:49] "None policy: Start" Nov 6 23:37:28.215182 kubelet[2306]: I1106 23:37:28.214051 2306 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 23:37:28.215182 kubelet[2306]: I1106 23:37:28.214077 2306 state_mem.go:35] "Initializing new in-memory state store" Nov 6 23:37:28.219929 kubelet[2306]: I1106 23:37:28.219875 2306 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 23:37:28.223642 kubelet[2306]: I1106 23:37:28.223606 2306 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 23:37:28.224423 kubelet[2306]: I1106 23:37:28.224395 2306 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 23:37:28.224655 kubelet[2306]: I1106 23:37:28.224639 2306 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
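Every API call in this window (the certificate signing request, the Node/Service/CSIDriver reflectors, the node lease) fails with connect: connection refused against 10.128.0.22:6443, because the kube-apiserver static pod has not been started yet; the lease controller notes it will retry on a 200ms interval. A hedged sketch of the kind of dial-with-backoff probe this corresponds to; the endpoint and initial interval come from the log, while the retry policy itself is illustrative rather than the kubelet's real client-go backoff.

// Illustrative only: dial the apiserver endpoint from the log until it
// accepts connections, doubling the wait between attempts.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.128.0.22:6443"
	wait := 200 * time.Millisecond // initial interval reported by the lease controller above
	for attempt := 1; attempt <= 50; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("apiserver reachable after %d attempt(s)\n", attempt)
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, wait)
		time.Sleep(wait)
		if wait < 10*time.Second {
			wait *= 2
		}
	}
	fmt.Println("gave up waiting for", addr)
}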
Nov 6 23:37:28.224856 kubelet[2306]: I1106 23:37:28.224841 2306 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 23:37:28.225050 kubelet[2306]: E1106 23:37:28.225020 2306 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:37:28.231570 kubelet[2306]: E1106 23:37:28.231515 2306 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 23:37:28.237571 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 23:37:28.246717 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 23:37:28.251765 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 23:37:28.262949 kubelet[2306]: E1106 23:37:28.262909 2306 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 23:37:28.263693 kubelet[2306]: I1106 23:37:28.263214 2306 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:37:28.263693 kubelet[2306]: I1106 23:37:28.263237 2306 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:37:28.263693 kubelet[2306]: I1106 23:37:28.263575 2306 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:37:28.265989 kubelet[2306]: E1106 23:37:28.265953 2306 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 23:37:28.266108 kubelet[2306]: E1106 23:37:28.266033 2306 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" not found" Nov 6 23:37:28.384791 kubelet[2306]: I1106 23:37:28.369820 2306 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.384791 kubelet[2306]: E1106 23:37:28.370353 2306 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.22:6443/api/v1/nodes\": dial tcp 10.128.0.22:6443: connect: connection refused" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.384791 kubelet[2306]: E1106 23:37:28.377819 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf?timeout=10s\": dial tcp 10.128.0.22:6443: connect: connection refused" interval="400ms" Nov 6 23:37:28.409099 systemd[1]: Created slice kubepods-burstable-pod53043d64241d37faf5fd6b64c4da617f.slice - libcontainer container kubepods-burstable-pod53043d64241d37faf5fd6b64c4da617f.slice. 
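The eviction manager whose control loop starts above is driven by the HardEvictionThresholds embedded in the Container Manager nodeConfig logged earlier: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. The following sketch decodes that fragment into a plain, hypothetical struct just to make the shape of the data readable; the field names follow the logged JSON, not the kubelet's internal eviction types.

// Decode the HardEvictionThresholds fragment copied from the logged
// nodeConfig into a stand-in struct and print each signal and bound.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"`
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
}

func main() {
	raw := `[
	 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
	 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
	 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
	 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}
	]`
	var ts []threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		log.Fatalf("decode thresholds: %v", err)
	}
	for _, t := range ts {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}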
Nov 6 23:37:28.416978 kubelet[2306]: E1106 23:37:28.416930 2306 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" not found" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.471923 systemd[1]: Created slice kubepods-burstable-pod979bad06e29551459f4c28e08f09224f.slice - libcontainer container kubepods-burstable-pod979bad06e29551459f4c28e08f09224f.slice. Nov 6 23:37:28.473948 kubelet[2306]: I1106 23:37:28.473906 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/979bad06e29551459f4c28e08f09224f-ca-certs\") pod \"kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"979bad06e29551459f4c28e08f09224f\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.474108 kubelet[2306]: I1106 23:37:28.473970 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/979bad06e29551459f4c28e08f09224f-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"979bad06e29551459f4c28e08f09224f\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.474108 kubelet[2306]: I1106 23:37:28.473999 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53043d64241d37faf5fd6b64c4da617f-k8s-certs\") pod \"kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"53043d64241d37faf5fd6b64c4da617f\") " pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.474108 kubelet[2306]: I1106 23:37:28.474027 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6503d0f5a27e440d8eb16551706b4c68-kubeconfig\") pod \"kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"6503d0f5a27e440d8eb16551706b4c68\") " pod="kube-system/kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.474108 kubelet[2306]: I1106 23:37:28.474056 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/979bad06e29551459f4c28e08f09224f-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"979bad06e29551459f4c28e08f09224f\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.474340 kubelet[2306]: I1106 23:37:28.474108 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/979bad06e29551459f4c28e08f09224f-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"979bad06e29551459f4c28e08f09224f\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.474340 kubelet[2306]: I1106 23:37:28.474156 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/979bad06e29551459f4c28e08f09224f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"979bad06e29551459f4c28e08f09224f\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.474340 kubelet[2306]: I1106 23:37:28.474187 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53043d64241d37faf5fd6b64c4da617f-ca-certs\") pod \"kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"53043d64241d37faf5fd6b64c4da617f\") " pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.474340 kubelet[2306]: I1106 23:37:28.474219 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53043d64241d37faf5fd6b64c4da617f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"53043d64241d37faf5fd6b64c4da617f\") " pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.476535 kubelet[2306]: E1106 23:37:28.476457 2306 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" not found" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.492108 systemd[1]: Created slice kubepods-burstable-pod6503d0f5a27e440d8eb16551706b4c68.slice - libcontainer container kubepods-burstable-pod6503d0f5a27e440d8eb16551706b4c68.slice. 
Nov 6 23:37:28.496314 kubelet[2306]: E1106 23:37:28.496271 2306 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" not found" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.577307 kubelet[2306]: I1106 23:37:28.577248 2306 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.577730 kubelet[2306]: E1106 23:37:28.577673 2306 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.22:6443/api/v1/nodes\": dial tcp 10.128.0.22:6443: connect: connection refused" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.718988 containerd[1471]: time="2025-11-06T23:37:28.718930879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf,Uid:53043d64241d37faf5fd6b64c4da617f,Namespace:kube-system,Attempt:0,}" Nov 6 23:37:28.778589 containerd[1471]: time="2025-11-06T23:37:28.778512477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf,Uid:979bad06e29551459f4c28e08f09224f,Namespace:kube-system,Attempt:0,}" Nov 6 23:37:28.779267 kubelet[2306]: E1106 23:37:28.779168 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf?timeout=10s\": dial tcp 10.128.0.22:6443: connect: connection refused" interval="800ms" Nov 6 23:37:28.798284 containerd[1471]: time="2025-11-06T23:37:28.798209268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf,Uid:6503d0f5a27e440d8eb16551706b4c68,Namespace:kube-system,Attempt:0,}" Nov 6 23:37:28.951395 kubelet[2306]: E1106 23:37:28.951322 2306 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf&limit=500&resourceVersion=0\": dial tcp 10.128.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 23:37:28.982928 kubelet[2306]: I1106 23:37:28.982785 2306 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:28.983290 kubelet[2306]: E1106 23:37:28.983234 2306 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.22:6443/api/v1/nodes\": dial tcp 10.128.0.22:6443: connect: connection refused" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:29.037267 kubelet[2306]: E1106 23:37:29.037172 2306 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 23:37:29.179955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1741963222.mount: Deactivated successfully. 
Nov 6 23:37:29.191294 containerd[1471]: time="2025-11-06T23:37:29.190189486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:37:29.193705 containerd[1471]: time="2025-11-06T23:37:29.193590672Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:37:29.197019 containerd[1471]: time="2025-11-06T23:37:29.196946410Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Nov 6 23:37:29.198074 containerd[1471]: time="2025-11-06T23:37:29.198008186Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 6 23:37:29.200555 containerd[1471]: time="2025-11-06T23:37:29.200510244Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:37:29.204535 containerd[1471]: time="2025-11-06T23:37:29.203589405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:37:29.204535 containerd[1471]: time="2025-11-06T23:37:29.203828242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 6 23:37:29.207529 containerd[1471]: time="2025-11-06T23:37:29.207426245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:37:29.210507 containerd[1471]: time="2025-11-06T23:37:29.209988193Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 431.327631ms" Nov 6 23:37:29.213367 containerd[1471]: time="2025-11-06T23:37:29.213303999Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 494.241402ms" Nov 6 23:37:29.213648 containerd[1471]: time="2025-11-06T23:37:29.213590225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 415.24902ms" Nov 6 23:37:29.372293 kubelet[2306]: E1106 23:37:29.372132 2306 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.22:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 23:37:29.434339 containerd[1471]: time="2025-11-06T23:37:29.425503426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:37:29.434339 containerd[1471]: time="2025-11-06T23:37:29.433985323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:37:29.434339 containerd[1471]: time="2025-11-06T23:37:29.434023178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:37:29.434339 containerd[1471]: time="2025-11-06T23:37:29.434177103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:37:29.441491 containerd[1471]: time="2025-11-06T23:37:29.440847816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:37:29.441491 containerd[1471]: time="2025-11-06T23:37:29.440947789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:37:29.441491 containerd[1471]: time="2025-11-06T23:37:29.440981970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:37:29.441491 containerd[1471]: time="2025-11-06T23:37:29.441152643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:37:29.446048 containerd[1471]: time="2025-11-06T23:37:29.445145561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:37:29.446048 containerd[1471]: time="2025-11-06T23:37:29.445216516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:37:29.446048 containerd[1471]: time="2025-11-06T23:37:29.445244614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:37:29.446048 containerd[1471]: time="2025-11-06T23:37:29.445365525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:37:29.484751 systemd[1]: Started cri-containerd-d123de48013fec40a822b758e5ad08510f6c71fe652d5805899ccfbecdea8e1d.scope - libcontainer container d123de48013fec40a822b758e5ad08510f6c71fe652d5805899ccfbecdea8e1d. Nov 6 23:37:29.493892 systemd[1]: Started cri-containerd-a169052f081fb53a9d47adf3eb9f44070da2001335fe7720aa8344941d0eb309.scope - libcontainer container a169052f081fb53a9d47adf3eb9f44070da2001335fe7720aa8344941d0eb309. Nov 6 23:37:29.513035 systemd[1]: Started cri-containerd-180d91abd3bbed4e870c7cea638e7bcedeedfdca9f20117d9af7d2b513af8b19.scope - libcontainer container 180d91abd3bbed4e870c7cea638e7bcedeedfdca9f20117d9af7d2b513af8b19. 
Nov 6 23:37:29.585538 kubelet[2306]: E1106 23:37:29.584256 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf?timeout=10s\": dial tcp 10.128.0.22:6443: connect: connection refused" interval="1.6s" Nov 6 23:37:29.594790 containerd[1471]: time="2025-11-06T23:37:29.594737846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf,Uid:979bad06e29551459f4c28e08f09224f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a169052f081fb53a9d47adf3eb9f44070da2001335fe7720aa8344941d0eb309\"" Nov 6 23:37:29.600338 kubelet[2306]: E1106 23:37:29.600292 2306 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38" Nov 6 23:37:29.609308 containerd[1471]: time="2025-11-06T23:37:29.609242498Z" level=info msg="CreateContainer within sandbox \"a169052f081fb53a9d47adf3eb9f44070da2001335fe7720aa8344941d0eb309\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 23:37:29.618056 kubelet[2306]: E1106 23:37:29.617988 2306 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 23:37:29.635530 containerd[1471]: time="2025-11-06T23:37:29.634362607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf,Uid:53043d64241d37faf5fd6b64c4da617f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d123de48013fec40a822b758e5ad08510f6c71fe652d5805899ccfbecdea8e1d\"" Nov 6 23:37:29.639628 kubelet[2306]: E1106 23:37:29.637239 2306 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945" Nov 6 23:37:29.644846 containerd[1471]: time="2025-11-06T23:37:29.644788085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf,Uid:6503d0f5a27e440d8eb16551706b4c68,Namespace:kube-system,Attempt:0,} returns sandbox id \"180d91abd3bbed4e870c7cea638e7bcedeedfdca9f20117d9af7d2b513af8b19\"" Nov 6 23:37:29.645117 containerd[1471]: time="2025-11-06T23:37:29.645072225Z" level=info msg="CreateContainer within sandbox \"d123de48013fec40a822b758e5ad08510f6c71fe652d5805899ccfbecdea8e1d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 23:37:29.648958 containerd[1471]: time="2025-11-06T23:37:29.648894590Z" level=info msg="CreateContainer within sandbox \"a169052f081fb53a9d47adf3eb9f44070da2001335fe7720aa8344941d0eb309\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"548d3e5912ac4a82f5ec6655f6511da6a6ad053558509ee97535476e40893112\"" Nov 6 23:37:29.660980 containerd[1471]: time="2025-11-06T23:37:29.660911690Z" level=info msg="StartContainer for 
\"548d3e5912ac4a82f5ec6655f6511da6a6ad053558509ee97535476e40893112\"" Nov 6 23:37:29.662666 kubelet[2306]: E1106 23:37:29.662441 2306 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945" Nov 6 23:37:29.667383 containerd[1471]: time="2025-11-06T23:37:29.666885552Z" level=info msg="CreateContainer within sandbox \"d123de48013fec40a822b758e5ad08510f6c71fe652d5805899ccfbecdea8e1d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b9af239e6fb1a88d0395f96077ec0a85399f306109d90e3b17de7dde13a3371e\"" Nov 6 23:37:29.668237 containerd[1471]: time="2025-11-06T23:37:29.667999683Z" level=info msg="StartContainer for \"b9af239e6fb1a88d0395f96077ec0a85399f306109d90e3b17de7dde13a3371e\"" Nov 6 23:37:29.671492 containerd[1471]: time="2025-11-06T23:37:29.671262792Z" level=info msg="CreateContainer within sandbox \"180d91abd3bbed4e870c7cea638e7bcedeedfdca9f20117d9af7d2b513af8b19\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 23:37:29.709334 containerd[1471]: time="2025-11-06T23:37:29.709169715Z" level=info msg="CreateContainer within sandbox \"180d91abd3bbed4e870c7cea638e7bcedeedfdca9f20117d9af7d2b513af8b19\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3d6a670eee614d6c663b34e0a22ec6cd32085f3822278b68b23540b322aa59bd\"" Nov 6 23:37:29.712238 containerd[1471]: time="2025-11-06T23:37:29.711918220Z" level=info msg="StartContainer for \"3d6a670eee614d6c663b34e0a22ec6cd32085f3822278b68b23540b322aa59bd\"" Nov 6 23:37:29.730745 systemd[1]: Started cri-containerd-b9af239e6fb1a88d0395f96077ec0a85399f306109d90e3b17de7dde13a3371e.scope - libcontainer container b9af239e6fb1a88d0395f96077ec0a85399f306109d90e3b17de7dde13a3371e. Nov 6 23:37:29.747188 systemd[1]: Started cri-containerd-548d3e5912ac4a82f5ec6655f6511da6a6ad053558509ee97535476e40893112.scope - libcontainer container 548d3e5912ac4a82f5ec6655f6511da6a6ad053558509ee97535476e40893112. Nov 6 23:37:29.786882 systemd[1]: Started cri-containerd-3d6a670eee614d6c663b34e0a22ec6cd32085f3822278b68b23540b322aa59bd.scope - libcontainer container 3d6a670eee614d6c663b34e0a22ec6cd32085f3822278b68b23540b322aa59bd. 
Nov 6 23:37:29.792029 kubelet[2306]: I1106 23:37:29.791280 2306 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:29.793737 kubelet[2306]: E1106 23:37:29.793684 2306 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.22:6443/api/v1/nodes\": dial tcp 10.128.0.22:6443: connect: connection refused" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:29.862894 containerd[1471]: time="2025-11-06T23:37:29.862838988Z" level=info msg="StartContainer for \"b9af239e6fb1a88d0395f96077ec0a85399f306109d90e3b17de7dde13a3371e\" returns successfully" Nov 6 23:37:29.887097 containerd[1471]: time="2025-11-06T23:37:29.886957583Z" level=info msg="StartContainer for \"548d3e5912ac4a82f5ec6655f6511da6a6ad053558509ee97535476e40893112\" returns successfully" Nov 6 23:37:29.935163 containerd[1471]: time="2025-11-06T23:37:29.935103696Z" level=info msg="StartContainer for \"3d6a670eee614d6c663b34e0a22ec6cd32085f3822278b68b23540b322aa59bd\" returns successfully" Nov 6 23:37:30.264325 kubelet[2306]: E1106 23:37:30.264272 2306 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" not found" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:30.264579 kubelet[2306]: E1106 23:37:30.264551 2306 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" not found" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:30.275637 kubelet[2306]: E1106 23:37:30.275588 2306 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" not found" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:30.302495 update_engine[1460]: I20251106 23:37:30.300513 1460 update_attempter.cc:509] Updating boot flags... 
Nov 6 23:37:30.420438 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2592) Nov 6 23:37:30.622619 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2591) Nov 6 23:37:30.870617 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2591) Nov 6 23:37:31.278097 kubelet[2306]: E1106 23:37:31.278050 2306 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" not found" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:31.278723 kubelet[2306]: E1106 23:37:31.278358 2306 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" not found" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:31.398455 kubelet[2306]: I1106 23:37:31.398408 2306 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:32.231771 kubelet[2306]: E1106 23:37:32.231722 2306 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" not found" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:33.029606 kubelet[2306]: E1106 23:37:33.029556 2306 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" not found" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:33.473695 kubelet[2306]: E1106 23:37:33.473642 2306 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" not found" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:33.605496 kubelet[2306]: I1106 23:37:33.605036 2306 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:33.677101 kubelet[2306]: I1106 23:37:33.676804 2306 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:33.710003 kubelet[2306]: E1106 23:37:33.709953 2306 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:33.710893 kubelet[2306]: I1106 23:37:33.710447 2306 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:33.718639 kubelet[2306]: E1106 23:37:33.718321 2306 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:33.718639 kubelet[2306]: I1106 23:37:33.718365 2306 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:33.731373 kubelet[2306]: E1106 23:37:33.731199 2306 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:34.134176 kubelet[2306]: I1106 23:37:34.134017 2306 apiserver.go:52] "Watching apiserver" Nov 6 23:37:34.171591 kubelet[2306]: I1106 23:37:34.171535 2306 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 23:37:34.906905 kubelet[2306]: I1106 23:37:34.906849 2306 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:34.914829 kubelet[2306]: I1106 23:37:34.914757 2306 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Nov 6 23:37:35.386933 systemd[1]: Reload requested from client PID 2612 ('systemctl') (unit session-9.scope)... Nov 6 23:37:35.386957 systemd[1]: Reloading... Nov 6 23:37:35.543522 zram_generator::config[2660]: No configuration found. Nov 6 23:37:35.703191 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:37:35.887144 systemd[1]: Reloading finished in 499 ms. Nov 6 23:37:35.924182 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:37:35.938699 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 23:37:35.939050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:37:35.939131 systemd[1]: kubelet.service: Consumed 1.889s CPU time, 133.5M memory peak. Nov 6 23:37:35.952639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:37:36.318716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:37:36.330155 (kubelet)[2705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:37:36.421067 kubelet[2705]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:37:36.423070 kubelet[2705]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 23:37:36.423070 kubelet[2705]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 23:37:36.423070 kubelet[2705]: I1106 23:37:36.421665 2705 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:37:36.433006 kubelet[2705]: I1106 23:37:36.432964 2705 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 23:37:36.433247 kubelet[2705]: I1106 23:37:36.433226 2705 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:37:36.434091 kubelet[2705]: I1106 23:37:36.434067 2705 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 23:37:36.436572 kubelet[2705]: I1106 23:37:36.436543 2705 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 23:37:36.441743 kubelet[2705]: I1106 23:37:36.441713 2705 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:37:36.455940 kubelet[2705]: E1106 23:37:36.455881 2705 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 6 23:37:36.456162 kubelet[2705]: I1106 23:37:36.456140 2705 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 6 23:37:36.458194 sudo[2719]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 6 23:37:36.458843 sudo[2719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 6 23:37:36.462723 kubelet[2705]: I1106 23:37:36.462321 2705 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 23:37:36.463426 kubelet[2705]: I1106 23:37:36.462996 2705 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:37:36.463426 kubelet[2705]: I1106 23:37:36.463038 2705 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:37:36.463426 kubelet[2705]: I1106 23:37:36.463286 2705 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 23:37:36.463426 kubelet[2705]: I1106 23:37:36.463301 2705 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 23:37:36.463818 kubelet[2705]: I1106 23:37:36.463366 2705 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:37:36.464744 kubelet[2705]: I1106 23:37:36.464721 2705 kubelet.go:480] "Attempting to sync node with API server" Nov 6 23:37:36.464846 kubelet[2705]: I1106 23:37:36.464750 2705 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:37:36.464846 kubelet[2705]: I1106 23:37:36.464789 2705 kubelet.go:386] "Adding apiserver pod source" Nov 6 23:37:36.464846 kubelet[2705]: I1106 23:37:36.464814 2705 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:37:36.471086 kubelet[2705]: I1106 23:37:36.467806 2705 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 6 23:37:36.471086 kubelet[2705]: I1106 23:37:36.468724 2705 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 23:37:36.518944 kubelet[2705]: I1106 23:37:36.517765 2705 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 23:37:36.518944 kubelet[2705]: I1106 23:37:36.517861 2705 server.go:1289] "Started kubelet" Nov 6 23:37:36.521022 kubelet[2705]: I1106 23:37:36.520426 2705 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:37:36.530419 
kubelet[2705]: I1106 23:37:36.528274 2705 server.go:317] "Adding debug handlers to kubelet server" Nov 6 23:37:36.532972 kubelet[2705]: I1106 23:37:36.522876 2705 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:37:36.534115 kubelet[2705]: I1106 23:37:36.523005 2705 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:37:36.534422 kubelet[2705]: I1106 23:37:36.522706 2705 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:37:36.535875 kubelet[2705]: I1106 23:37:36.535321 2705 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 23:37:36.536441 kubelet[2705]: I1106 23:37:36.534437 2705 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 23:37:36.536582 kubelet[2705]: I1106 23:37:36.534285 2705 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 23:37:36.537391 kubelet[2705]: I1106 23:37:36.536906 2705 reconciler.go:26] "Reconciler: start to sync state" Nov 6 23:37:36.538494 kubelet[2705]: I1106 23:37:36.538030 2705 factory.go:223] Registration of the systemd container factory successfully Nov 6 23:37:36.538747 kubelet[2705]: I1106 23:37:36.538718 2705 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:37:36.543898 kubelet[2705]: E1106 23:37:36.543860 2705 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 23:37:36.544344 kubelet[2705]: I1106 23:37:36.544208 2705 factory.go:223] Registration of the containerd container factory successfully Nov 6 23:37:36.597222 kubelet[2705]: I1106 23:37:36.596793 2705 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 23:37:36.603157 kubelet[2705]: I1106 23:37:36.600233 2705 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 23:37:36.603157 kubelet[2705]: I1106 23:37:36.600266 2705 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 23:37:36.603157 kubelet[2705]: I1106 23:37:36.600293 2705 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 23:37:36.603157 kubelet[2705]: I1106 23:37:36.600307 2705 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 23:37:36.603157 kubelet[2705]: E1106 23:37:36.600440 2705 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:37:36.690944 kubelet[2705]: I1106 23:37:36.690903 2705 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:37:36.690944 kubelet[2705]: I1106 23:37:36.690934 2705 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:37:36.691173 kubelet[2705]: I1106 23:37:36.690963 2705 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:37:36.691173 kubelet[2705]: I1106 23:37:36.691156 2705 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 23:37:36.691272 kubelet[2705]: I1106 23:37:36.691171 2705 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 23:37:36.691272 kubelet[2705]: I1106 23:37:36.691199 2705 policy_none.go:49] "None policy: Start" Nov 6 23:37:36.691272 kubelet[2705]: I1106 23:37:36.691214 2705 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 23:37:36.691272 kubelet[2705]: I1106 23:37:36.691229 2705 state_mem.go:35] "Initializing new in-memory state store" Nov 6 23:37:36.692787 kubelet[2705]: I1106 23:37:36.691667 2705 state_mem.go:75] "Updated machine memory state" Nov 6 23:37:36.699333 kubelet[2705]: E1106 23:37:36.698935 2705 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 23:37:36.699333 kubelet[2705]: I1106 23:37:36.699155 2705 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:37:36.699333 kubelet[2705]: I1106 23:37:36.699170 2705 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:37:36.700101 kubelet[2705]: I1106 23:37:36.699990 2705 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:37:36.702886 kubelet[2705]: I1106 23:37:36.702308 2705 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.709119 kubelet[2705]: I1106 23:37:36.708354 2705 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.709119 kubelet[2705]: I1106 23:37:36.709122 2705 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.712933 kubelet[2705]: E1106 23:37:36.712340 2705 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 23:37:36.736292 kubelet[2705]: I1106 23:37:36.735922 2705 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Nov 6 23:37:36.738831 kubelet[2705]: I1106 23:37:36.738438 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/979bad06e29551459f4c28e08f09224f-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"979bad06e29551459f4c28e08f09224f\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.738831 kubelet[2705]: I1106 23:37:36.738501 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6503d0f5a27e440d8eb16551706b4c68-kubeconfig\") pod \"kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"6503d0f5a27e440d8eb16551706b4c68\") " pod="kube-system/kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.738831 kubelet[2705]: I1106 23:37:36.738532 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53043d64241d37faf5fd6b64c4da617f-ca-certs\") pod \"kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"53043d64241d37faf5fd6b64c4da617f\") " pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.738831 kubelet[2705]: I1106 23:37:36.738560 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53043d64241d37faf5fd6b64c4da617f-k8s-certs\") pod \"kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"53043d64241d37faf5fd6b64c4da617f\") " pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.739163 kubelet[2705]: I1106 23:37:36.738591 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53043d64241d37faf5fd6b64c4da617f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"53043d64241d37faf5fd6b64c4da617f\") " pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.739163 kubelet[2705]: I1106 23:37:36.738622 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/979bad06e29551459f4c28e08f09224f-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"979bad06e29551459f4c28e08f09224f\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.739163 kubelet[2705]: I1106 23:37:36.738660 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/979bad06e29551459f4c28e08f09224f-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"979bad06e29551459f4c28e08f09224f\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.739163 kubelet[2705]: I1106 23:37:36.738689 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/979bad06e29551459f4c28e08f09224f-ca-certs\") pod \"kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"979bad06e29551459f4c28e08f09224f\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.739361 kubelet[2705]: I1106 23:37:36.738718 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/979bad06e29551459f4c28e08f09224f-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" (UID: \"979bad06e29551459f4c28e08f09224f\") " pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.747289 kubelet[2705]: I1106 23:37:36.747216 2705 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Nov 6 23:37:36.751175 kubelet[2705]: I1106 23:37:36.750929 2705 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Nov 6 23:37:36.751175 kubelet[2705]: E1106 23:37:36.751006 2705 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.834502 kubelet[2705]: I1106 23:37:36.834176 2705 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.850589 kubelet[2705]: I1106 23:37:36.850059 2705 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:36.850589 kubelet[2705]: I1106 23:37:36.850158 2705 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:37.358978 sudo[2719]: pam_unix(sudo:session): session closed for user root Nov 6 23:37:37.466006 kubelet[2705]: I1106 23:37:37.465677 2705 apiserver.go:52] "Watching apiserver" Nov 6 23:37:37.537621 kubelet[2705]: I1106 23:37:37.537512 2705 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 23:37:37.669082 kubelet[2705]: I1106 23:37:37.668890 2705 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:37.671653 kubelet[2705]: I1106 23:37:37.671349 2705 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:37.685146 kubelet[2705]: I1106 23:37:37.684155 2705 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more 
than 63 characters]" Nov 6 23:37:37.685146 kubelet[2705]: E1106 23:37:37.684232 2705 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" already exists" pod="kube-system/kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:37.686993 kubelet[2705]: I1106 23:37:37.686446 2705 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Nov 6 23:37:37.686993 kubelet[2705]: E1106 23:37:37.686830 2705 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" Nov 6 23:37:37.743525 kubelet[2705]: I1106 23:37:37.741981 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" podStartSLOduration=3.7419599850000003 podStartE2EDuration="3.741959985s" podCreationTimestamp="2025-11-06 23:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:37:37.723630135 +0000 UTC m=+1.385778025" watchObservedRunningTime="2025-11-06 23:37:37.741959985 +0000 UTC m=+1.404107840" Nov 6 23:37:37.743525 kubelet[2705]: I1106 23:37:37.742120 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" podStartSLOduration=1.7421151780000002 podStartE2EDuration="1.742115178s" podCreationTimestamp="2025-11-06 23:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:37:37.740708691 +0000 UTC m=+1.402856582" watchObservedRunningTime="2025-11-06 23:37:37.742115178 +0000 UTC m=+1.404263041" Nov 6 23:37:37.777803 kubelet[2705]: I1106 23:37:37.777282 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf" podStartSLOduration=1.777259651 podStartE2EDuration="1.777259651s" podCreationTimestamp="2025-11-06 23:37:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:37:37.756200483 +0000 UTC m=+1.418348343" watchObservedRunningTime="2025-11-06 23:37:37.777259651 +0000 UTC m=+1.439407512" Nov 6 23:37:39.527046 sudo[1758]: pam_unix(sudo:session): session closed for user root Nov 6 23:37:39.570190 sshd[1757]: Connection closed by 139.178.89.65 port 40792 Nov 6 23:37:39.571791 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Nov 6 23:37:39.578750 systemd[1]: sshd@8-10.128.0.22:22-139.178.89.65:40792.service: Deactivated successfully. Nov 6 23:37:39.582422 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 23:37:39.583090 systemd[1]: session-9.scope: Consumed 7.633s CPU time, 269M memory peak. Nov 6 23:37:39.585226 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit. Nov 6 23:37:39.586783 systemd-logind[1459]: Removed session 9. 
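Side note (not part of the log): the pod_startup_latency_tracker figures above are internally consistent; for each of the three static pods, podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp. A quick check with the kube-apiserver pod's values, truncated to microseconds (the log carries nanoseconds):

    from datetime import datetime, timezone

    # Values copied from the kube-apiserver pod_startup_latency_tracker entry above.
    created  = datetime(2025, 11, 6, 23, 37, 34, tzinfo=timezone.utc)            # podCreationTimestamp
    observed = datetime(2025, 11, 6, 23, 37, 37, 741960, tzinfo=timezone.utc)    # watchObservedRunningTime

    # ~3.74196 s, matching the logged podStartE2EDuration of "3.741959985s".
    print((observed - created).total_seconds())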
Nov 6 23:37:41.674531 kubelet[2705]: I1106 23:37:41.674257 2705 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 23:37:41.675739 containerd[1471]: time="2025-11-06T23:37:41.675696494Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 23:37:41.676229 kubelet[2705]: I1106 23:37:41.675980 2705 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 23:37:42.812798 systemd[1]: Created slice kubepods-besteffort-pod4437b5d0_430c_4779_a5a0_d86ed421e8f1.slice - libcontainer container kubepods-besteffort-pod4437b5d0_430c_4779_a5a0_d86ed421e8f1.slice. Nov 6 23:37:42.839649 systemd[1]: Created slice kubepods-burstable-pod83df6523_ec5e_46af_8792_b01a49937de4.slice - libcontainer container kubepods-burstable-pod83df6523_ec5e_46af_8792_b01a49937de4.slice. Nov 6 23:37:42.880706 kubelet[2705]: I1106 23:37:42.879988 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-cilium-cgroup\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.880706 kubelet[2705]: I1106 23:37:42.880050 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-xtables-lock\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.880706 kubelet[2705]: I1106 23:37:42.880082 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83df6523-ec5e-46af-8792-b01a49937de4-clustermesh-secrets\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.880706 kubelet[2705]: I1106 23:37:42.880111 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-host-proc-sys-kernel\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.880706 kubelet[2705]: I1106 23:37:42.880148 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83df6523-ec5e-46af-8792-b01a49937de4-hubble-tls\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.882720 kubelet[2705]: I1106 23:37:42.880175 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw6g8\" (UniqueName: \"kubernetes.io/projected/83df6523-ec5e-46af-8792-b01a49937de4-kube-api-access-jw6g8\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.882720 kubelet[2705]: I1106 23:37:42.880217 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4437b5d0-430c-4779-a5a0-d86ed421e8f1-kube-proxy\") pod \"kube-proxy-6bn69\" (UID: \"4437b5d0-430c-4779-a5a0-d86ed421e8f1\") " pod="kube-system/kube-proxy-6bn69" 
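Side note (not part of the log): the kubepods slice names systemd reports above appear to be derived mechanically from the pod's QoS class and UID, with dashes mapped to underscores. This naming rule is an inference from the two "Created slice" entries, not something the log states; a minimal sketch reproducing both names:

    def pod_slice(qos: str, uid: str) -> str:
        # Inferred from the journal entries above: QoS class prefix plus
        # the pod UID with '-' replaced by '_'.
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("besteffort", "4437b5d0-430c-4779-a5a0-d86ed421e8f1"))  # kube-proxy-6bn69
    print(pod_slice("burstable",  "83df6523-ec5e-46af-8792-b01a49937de4"))  # cilium-mll8g
    # -> kubepods-besteffort-pod4437b5d0_430c_4779_a5a0_d86ed421e8f1.slice
    # -> kubepods-burstable-pod83df6523_ec5e_46af_8792_b01a49937de4.slice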
Nov 6 23:37:42.882720 kubelet[2705]: I1106 23:37:42.880247 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-cilium-run\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.882720 kubelet[2705]: I1106 23:37:42.880279 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-hostproc\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.882720 kubelet[2705]: I1106 23:37:42.880304 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-host-proc-sys-net\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.882720 kubelet[2705]: I1106 23:37:42.880344 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4437b5d0-430c-4779-a5a0-d86ed421e8f1-lib-modules\") pod \"kube-proxy-6bn69\" (UID: \"4437b5d0-430c-4779-a5a0-d86ed421e8f1\") " pod="kube-system/kube-proxy-6bn69" Nov 6 23:37:42.883022 kubelet[2705]: I1106 23:37:42.880369 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-bpf-maps\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.883022 kubelet[2705]: I1106 23:37:42.880393 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-etc-cni-netd\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.883022 kubelet[2705]: I1106 23:37:42.880420 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4437b5d0-430c-4779-a5a0-d86ed421e8f1-xtables-lock\") pod \"kube-proxy-6bn69\" (UID: \"4437b5d0-430c-4779-a5a0-d86ed421e8f1\") " pod="kube-system/kube-proxy-6bn69" Nov 6 23:37:42.883022 kubelet[2705]: I1106 23:37:42.880448 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l7gp\" (UniqueName: \"kubernetes.io/projected/4437b5d0-430c-4779-a5a0-d86ed421e8f1-kube-api-access-2l7gp\") pod \"kube-proxy-6bn69\" (UID: \"4437b5d0-430c-4779-a5a0-d86ed421e8f1\") " pod="kube-system/kube-proxy-6bn69" Nov 6 23:37:42.883022 kubelet[2705]: I1106 23:37:42.880491 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-cni-path\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.883022 kubelet[2705]: I1106 23:37:42.880543 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-lib-modules\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.883318 kubelet[2705]: I1106 23:37:42.880605 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83df6523-ec5e-46af-8792-b01a49937de4-cilium-config-path\") pod \"cilium-mll8g\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " pod="kube-system/cilium-mll8g" Nov 6 23:37:42.889220 kubelet[2705]: I1106 23:37:42.888568 2705 status_manager.go:895] "Failed to get status for pod" podUID="8a1f9b4f-6ab5-4a17-9c76-891197dc40b3" pod="kube-system/cilium-operator-6c4d7847fc-g88cv" err="pods \"cilium-operator-6c4d7847fc-g88cv\" is forbidden: User \"system:node:ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf' and this object" Nov 6 23:37:42.900816 systemd[1]: Created slice kubepods-besteffort-pod8a1f9b4f_6ab5_4a17_9c76_891197dc40b3.slice - libcontainer container kubepods-besteffort-pod8a1f9b4f_6ab5_4a17_9c76_891197dc40b3.slice. Nov 6 23:37:42.981759 kubelet[2705]: I1106 23:37:42.981700 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tw2\" (UniqueName: \"kubernetes.io/projected/8a1f9b4f-6ab5-4a17-9c76-891197dc40b3-kube-api-access-66tw2\") pod \"cilium-operator-6c4d7847fc-g88cv\" (UID: \"8a1f9b4f-6ab5-4a17-9c76-891197dc40b3\") " pod="kube-system/cilium-operator-6c4d7847fc-g88cv" Nov 6 23:37:42.981979 kubelet[2705]: I1106 23:37:42.981807 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a1f9b4f-6ab5-4a17-9c76-891197dc40b3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-g88cv\" (UID: \"8a1f9b4f-6ab5-4a17-9c76-891197dc40b3\") " pod="kube-system/cilium-operator-6c4d7847fc-g88cv" Nov 6 23:37:43.125524 containerd[1471]: time="2025-11-06T23:37:43.125335961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6bn69,Uid:4437b5d0-430c-4779-a5a0-d86ed421e8f1,Namespace:kube-system,Attempt:0,}" Nov 6 23:37:43.149407 containerd[1471]: time="2025-11-06T23:37:43.148798441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mll8g,Uid:83df6523-ec5e-46af-8792-b01a49937de4,Namespace:kube-system,Attempt:0,}" Nov 6 23:37:43.168640 containerd[1471]: time="2025-11-06T23:37:43.168056951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:37:43.168640 containerd[1471]: time="2025-11-06T23:37:43.168132457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:37:43.168640 containerd[1471]: time="2025-11-06T23:37:43.168160455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:37:43.168640 containerd[1471]: time="2025-11-06T23:37:43.168296613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:37:43.202766 systemd[1]: Started cri-containerd-ac536a553fac37438aef0863cb9748ef13ca9521d50b4e7324287da03a6ec50f.scope - libcontainer container ac536a553fac37438aef0863cb9748ef13ca9521d50b4e7324287da03a6ec50f. Nov 6 23:37:43.205895 containerd[1471]: time="2025-11-06T23:37:43.204916990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g88cv,Uid:8a1f9b4f-6ab5-4a17-9c76-891197dc40b3,Namespace:kube-system,Attempt:0,}" Nov 6 23:37:43.268919 containerd[1471]: time="2025-11-06T23:37:43.268688695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6bn69,Uid:4437b5d0-430c-4779-a5a0-d86ed421e8f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac536a553fac37438aef0863cb9748ef13ca9521d50b4e7324287da03a6ec50f\"" Nov 6 23:37:43.285203 containerd[1471]: time="2025-11-06T23:37:43.281328875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:37:43.285203 containerd[1471]: time="2025-11-06T23:37:43.281410429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:37:43.285203 containerd[1471]: time="2025-11-06T23:37:43.281441077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:37:43.285203 containerd[1471]: time="2025-11-06T23:37:43.283811637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:37:43.290877 containerd[1471]: time="2025-11-06T23:37:43.290830687Z" level=info msg="CreateContainer within sandbox \"ac536a553fac37438aef0863cb9748ef13ca9521d50b4e7324287da03a6ec50f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 23:37:43.305199 containerd[1471]: time="2025-11-06T23:37:43.305043606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:37:43.305601 containerd[1471]: time="2025-11-06T23:37:43.305542107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:37:43.306351 containerd[1471]: time="2025-11-06T23:37:43.306191464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:37:43.313586 containerd[1471]: time="2025-11-06T23:37:43.310170268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:37:43.348735 systemd[1]: Started cri-containerd-df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece.scope - libcontainer container df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece. 
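Side note (not part of the log): each VerifyControllerAttachedVolume entry above carries a UniqueName that reads as <plugin>/<pod-UID>-<volume-name>. A small sketch splitting a few of the values seen above; the fixed 36-character UID length is an assumption inferred from these entries:

    # UniqueName values copied from the reconciler entries above.
    unique_names = [
        "kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-cilium-run",
        "kubernetes.io/configmap/4437b5d0-430c-4779-a5a0-d86ed421e8f1-kube-proxy",
        "kubernetes.io/projected/8a1f9b4f-6ab5-4a17-9c76-891197dc40b3-kube-api-access-66tw2",
    ]

    for name in unique_names:
        plugin, rest = name.rsplit("/", 1)
        uid, volume = rest[:36], rest[37:]   # UID is 36 chars in these entries
        print(f"{plugin:<28} uid={uid} volume={volume}")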
Nov 6 23:37:43.356639 containerd[1471]: time="2025-11-06T23:37:43.355193209Z" level=info msg="CreateContainer within sandbox \"ac536a553fac37438aef0863cb9748ef13ca9521d50b4e7324287da03a6ec50f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7170b5bd725317751056f1fae129511fba0427acd49bd05bfe77b3ae25111ba7\"" Nov 6 23:37:43.358132 containerd[1471]: time="2025-11-06T23:37:43.358082791Z" level=info msg="StartContainer for \"7170b5bd725317751056f1fae129511fba0427acd49bd05bfe77b3ae25111ba7\"" Nov 6 23:37:43.391207 systemd[1]: Started cri-containerd-26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176.scope - libcontainer container 26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176. Nov 6 23:37:43.451780 containerd[1471]: time="2025-11-06T23:37:43.451729252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mll8g,Uid:83df6523-ec5e-46af-8792-b01a49937de4,Namespace:kube-system,Attempt:0,} returns sandbox id \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\"" Nov 6 23:37:43.459821 containerd[1471]: time="2025-11-06T23:37:43.459775167Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 6 23:37:43.472021 systemd[1]: Started cri-containerd-7170b5bd725317751056f1fae129511fba0427acd49bd05bfe77b3ae25111ba7.scope - libcontainer container 7170b5bd725317751056f1fae129511fba0427acd49bd05bfe77b3ae25111ba7. Nov 6 23:37:43.504267 containerd[1471]: time="2025-11-06T23:37:43.503877784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g88cv,Uid:8a1f9b4f-6ab5-4a17-9c76-891197dc40b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\"" Nov 6 23:37:43.540859 containerd[1471]: time="2025-11-06T23:37:43.540790558Z" level=info msg="StartContainer for \"7170b5bd725317751056f1fae129511fba0427acd49bd05bfe77b3ae25111ba7\" returns successfully" Nov 6 23:37:43.730517 kubelet[2705]: I1106 23:37:43.730312 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6bn69" podStartSLOduration=1.7302828799999999 podStartE2EDuration="1.73028288s" podCreationTimestamp="2025-11-06 23:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:37:43.710753065 +0000 UTC m=+7.372900938" watchObservedRunningTime="2025-11-06 23:37:43.73028288 +0000 UTC m=+7.392430735" Nov 6 23:37:51.076538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578831985.mount: Deactivated successfully. 
Nov 6 23:37:54.240593 containerd[1471]: time="2025-11-06T23:37:54.240520218Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:54.242490 containerd[1471]: time="2025-11-06T23:37:54.242200568Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 6 23:37:54.244298 containerd[1471]: time="2025-11-06T23:37:54.243796135Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:54.246200 containerd[1471]: time="2025-11-06T23:37:54.246146402Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.786003031s" Nov 6 23:37:54.246328 containerd[1471]: time="2025-11-06T23:37:54.246199792Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 6 23:37:54.248185 containerd[1471]: time="2025-11-06T23:37:54.248146394Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 6 23:37:54.253698 containerd[1471]: time="2025-11-06T23:37:54.253648979Z" level=info msg="CreateContainer within sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 23:37:54.279816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3214348285.mount: Deactivated successfully. Nov 6 23:37:54.280275 containerd[1471]: time="2025-11-06T23:37:54.280036543Z" level=info msg="CreateContainer within sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251\"" Nov 6 23:37:54.284074 containerd[1471]: time="2025-11-06T23:37:54.282603214Z" level=info msg="StartContainer for \"9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251\"" Nov 6 23:37:54.332357 systemd[1]: run-containerd-runc-k8s.io-9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251-runc.iaGTBO.mount: Deactivated successfully. Nov 6 23:37:54.340697 systemd[1]: Started cri-containerd-9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251.scope - libcontainer container 9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251. Nov 6 23:37:54.380810 containerd[1471]: time="2025-11-06T23:37:54.380672024Z" level=info msg="StartContainer for \"9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251\" returns successfully" Nov 6 23:37:54.395307 systemd[1]: cri-containerd-9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251.scope: Deactivated successfully. 
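Side note (not part of the log): the two figures quoted above (bytes read during the pull and the reported pull duration) give a rough average transfer rate for the cilium image; this ignores registry handshakes and layer decompression, so it is only an estimate:

    # Figures copied from the containerd pull messages above.
    bytes_read = 166_730_503       # "bytes read=166730503"
    pull_secs  = 10.786003031      # "in 10.786003031s"

    print(f"{bytes_read / pull_secs / 1e6:.1f} MB/s average pull rate")  # roughly 15.5 MB/s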
Nov 6 23:37:55.269453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251-rootfs.mount: Deactivated successfully. Nov 6 23:37:56.239388 containerd[1471]: time="2025-11-06T23:37:56.239251527Z" level=info msg="shim disconnected" id=9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251 namespace=k8s.io Nov 6 23:37:56.239388 containerd[1471]: time="2025-11-06T23:37:56.239326234Z" level=warning msg="cleaning up after shim disconnected" id=9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251 namespace=k8s.io Nov 6 23:37:56.239388 containerd[1471]: time="2025-11-06T23:37:56.239340406Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:37:56.710932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154165429.mount: Deactivated successfully. Nov 6 23:37:56.763499 containerd[1471]: time="2025-11-06T23:37:56.763037527Z" level=info msg="CreateContainer within sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 23:37:56.800414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459270916.mount: Deactivated successfully. Nov 6 23:37:56.814427 containerd[1471]: time="2025-11-06T23:37:56.814049069Z" level=info msg="CreateContainer within sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71\"" Nov 6 23:37:56.818096 containerd[1471]: time="2025-11-06T23:37:56.817198274Z" level=info msg="StartContainer for \"eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71\"" Nov 6 23:37:56.915778 systemd[1]: Started cri-containerd-eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71.scope - libcontainer container eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71. Nov 6 23:37:56.999040 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 23:37:57.000169 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:37:57.002708 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:37:57.009589 containerd[1471]: time="2025-11-06T23:37:57.008425455Z" level=info msg="StartContainer for \"eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71\" returns successfully" Nov 6 23:37:57.014247 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:37:57.014682 systemd[1]: cri-containerd-eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71.scope: Deactivated successfully. Nov 6 23:37:57.058549 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
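Side note (not part of the log): the cilium pod is assembled from a sequence of short-lived helper containers (mount-cgroup above, apply-sysctl-overwrites here, with more created below), and their names can be pulled straight out of the CreateContainer messages. A minimal regex sketch against one of the messages above, with the quoting simplified:

    import re

    # One CreateContainer message copied (shortened) from the containerd entries above.
    msg = ('CreateContainer within sandbox "df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece" '
           'for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}')

    m = re.search(r'&ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:(?P<attempt>\d+)', msg)
    print(m.group('name'), m.group('attempt'))   # -> apply-sysctl-overwrites 0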
Nov 6 23:37:57.085439 containerd[1471]: time="2025-11-06T23:37:57.085351858Z" level=info msg="shim disconnected" id=eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71 namespace=k8s.io Nov 6 23:37:57.085814 containerd[1471]: time="2025-11-06T23:37:57.085787340Z" level=warning msg="cleaning up after shim disconnected" id=eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71 namespace=k8s.io Nov 6 23:37:57.085954 containerd[1471]: time="2025-11-06T23:37:57.085934180Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:37:57.692930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71-rootfs.mount: Deactivated successfully. Nov 6 23:37:57.759816 containerd[1471]: time="2025-11-06T23:37:57.759620074Z" level=info msg="CreateContainer within sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 23:37:57.805672 containerd[1471]: time="2025-11-06T23:37:57.803274977Z" level=info msg="CreateContainer within sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e\"" Nov 6 23:37:57.807046 containerd[1471]: time="2025-11-06T23:37:57.806430224Z" level=info msg="StartContainer for \"b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e\"" Nov 6 23:37:57.835494 containerd[1471]: time="2025-11-06T23:37:57.834285772Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:57.838517 containerd[1471]: time="2025-11-06T23:37:57.838435875Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 6 23:37:57.840508 containerd[1471]: time="2025-11-06T23:37:57.839941279Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:37:57.850863 containerd[1471]: time="2025-11-06T23:37:57.850795067Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.60259893s" Nov 6 23:37:57.850863 containerd[1471]: time="2025-11-06T23:37:57.850848612Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 6 23:37:57.858176 containerd[1471]: time="2025-11-06T23:37:57.857876481Z" level=info msg="CreateContainer within sandbox \"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 6 23:37:57.888492 containerd[1471]: time="2025-11-06T23:37:57.888397889Z" level=info msg="CreateContainer within sandbox 
\"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\"" Nov 6 23:37:57.888709 systemd[1]: Started cri-containerd-b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e.scope - libcontainer container b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e. Nov 6 23:37:57.891398 containerd[1471]: time="2025-11-06T23:37:57.890150580Z" level=info msg="StartContainer for \"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\"" Nov 6 23:37:57.942835 systemd[1]: Started cri-containerd-8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201.scope - libcontainer container 8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201. Nov 6 23:37:57.957072 containerd[1471]: time="2025-11-06T23:37:57.956945208Z" level=info msg="StartContainer for \"b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e\" returns successfully" Nov 6 23:37:57.965882 systemd[1]: cri-containerd-b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e.scope: Deactivated successfully. Nov 6 23:37:58.023122 containerd[1471]: time="2025-11-06T23:37:58.023056914Z" level=info msg="StartContainer for \"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\" returns successfully" Nov 6 23:37:58.176320 containerd[1471]: time="2025-11-06T23:37:58.176239389Z" level=info msg="shim disconnected" id=b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e namespace=k8s.io Nov 6 23:37:58.176320 containerd[1471]: time="2025-11-06T23:37:58.176315656Z" level=warning msg="cleaning up after shim disconnected" id=b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e namespace=k8s.io Nov 6 23:37:58.176734 containerd[1471]: time="2025-11-06T23:37:58.176332480Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:37:58.695412 systemd[1]: run-containerd-runc-k8s.io-b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e-runc.8xaYwP.mount: Deactivated successfully. Nov 6 23:37:58.697669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e-rootfs.mount: Deactivated successfully. Nov 6 23:37:58.769388 containerd[1471]: time="2025-11-06T23:37:58.769335561Z" level=info msg="CreateContainer within sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 23:37:58.804776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount368773002.mount: Deactivated successfully. Nov 6 23:37:58.809497 containerd[1471]: time="2025-11-06T23:37:58.807626800Z" level=info msg="CreateContainer within sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446\"" Nov 6 23:37:58.810791 containerd[1471]: time="2025-11-06T23:37:58.810711465Z" level=info msg="StartContainer for \"d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446\"" Nov 6 23:37:58.901257 systemd[1]: Started cri-containerd-d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446.scope - libcontainer container d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446. 
Nov 6 23:37:58.972584 kubelet[2705]: I1106 23:37:58.971610 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-g88cv" podStartSLOduration=2.626699076 podStartE2EDuration="16.971583147s" podCreationTimestamp="2025-11-06 23:37:42 +0000 UTC" firstStartedPulling="2025-11-06 23:37:43.506672468 +0000 UTC m=+7.168820321" lastFinishedPulling="2025-11-06 23:37:57.85155654 +0000 UTC m=+21.513704392" observedRunningTime="2025-11-06 23:37:58.875966324 +0000 UTC m=+22.538114186" watchObservedRunningTime="2025-11-06 23:37:58.971583147 +0000 UTC m=+22.633731009" Nov 6 23:37:59.033767 systemd[1]: cri-containerd-d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446.scope: Deactivated successfully. Nov 6 23:37:59.036913 containerd[1471]: time="2025-11-06T23:37:59.036752635Z" level=info msg="StartContainer for \"d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446\" returns successfully" Nov 6 23:37:59.089354 containerd[1471]: time="2025-11-06T23:37:59.089268489Z" level=info msg="shim disconnected" id=d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446 namespace=k8s.io Nov 6 23:37:59.089354 containerd[1471]: time="2025-11-06T23:37:59.089342719Z" level=warning msg="cleaning up after shim disconnected" id=d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446 namespace=k8s.io Nov 6 23:37:59.089354 containerd[1471]: time="2025-11-06T23:37:59.089358194Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:37:59.120628 containerd[1471]: time="2025-11-06T23:37:59.120556470Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:37:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:37:59.691432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446-rootfs.mount: Deactivated successfully. Nov 6 23:37:59.776118 containerd[1471]: time="2025-11-06T23:37:59.776057763Z" level=info msg="CreateContainer within sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 23:37:59.812234 containerd[1471]: time="2025-11-06T23:37:59.808895461Z" level=info msg="CreateContainer within sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\"" Nov 6 23:37:59.812234 containerd[1471]: time="2025-11-06T23:37:59.810297020Z" level=info msg="StartContainer for \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\"" Nov 6 23:37:59.876784 systemd[1]: Started cri-containerd-5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0.scope - libcontainer container 5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0. 
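Side note (not part of the log): for the cilium-operator pod the tracker reports a much smaller podStartSLOduration (2.626699076s) than podStartE2EDuration (16.971583147s); the gap is the image pull window (firstStartedPulling to lastFinishedPulling), consistent with the SLO figure excluding pull time. A quick check, with timestamps truncated to microseconds so the last digits differ slightly from the logged value:

    from datetime import datetime, timezone

    # Values copied from the cilium-operator pod_startup_latency_tracker entry above.
    e2e_duration  = 16.971583147
    first_pulling = datetime(2025, 11, 6, 23, 37, 43, 506672, tzinfo=timezone.utc)
    last_pulling  = datetime(2025, 11, 6, 23, 37, 57, 851557, tzinfo=timezone.utc)

    pull_time = (last_pulling - first_pulling).total_seconds()
    print(round(e2e_duration - pull_time, 6))   # ~2.6267, matching podStartSLOduration up to rounding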
Nov 6 23:37:59.922622 containerd[1471]: time="2025-11-06T23:37:59.922543727Z" level=info msg="StartContainer for \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\" returns successfully" Nov 6 23:38:00.166387 kubelet[2705]: I1106 23:38:00.166338 2705 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 23:38:00.231552 systemd[1]: Created slice kubepods-burstable-pod3e293d08_65f3_4e08_bbb6_e12059ed3982.slice - libcontainer container kubepods-burstable-pod3e293d08_65f3_4e08_bbb6_e12059ed3982.slice. Nov 6 23:38:00.249565 systemd[1]: Created slice kubepods-burstable-pod7235d1a2_2058_4379_9b1a_b895e47b55c7.slice - libcontainer container kubepods-burstable-pod7235d1a2_2058_4379_9b1a_b895e47b55c7.slice. Nov 6 23:38:00.318026 kubelet[2705]: I1106 23:38:00.317818 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7235d1a2-2058-4379-9b1a-b895e47b55c7-config-volume\") pod \"coredns-674b8bbfcf-rrvnd\" (UID: \"7235d1a2-2058-4379-9b1a-b895e47b55c7\") " pod="kube-system/coredns-674b8bbfcf-rrvnd" Nov 6 23:38:00.318664 kubelet[2705]: I1106 23:38:00.318384 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e293d08-65f3-4e08-bbb6-e12059ed3982-config-volume\") pod \"coredns-674b8bbfcf-5pgwz\" (UID: \"3e293d08-65f3-4e08-bbb6-e12059ed3982\") " pod="kube-system/coredns-674b8bbfcf-5pgwz" Nov 6 23:38:00.318664 kubelet[2705]: I1106 23:38:00.318540 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlpw5\" (UniqueName: \"kubernetes.io/projected/7235d1a2-2058-4379-9b1a-b895e47b55c7-kube-api-access-mlpw5\") pod \"coredns-674b8bbfcf-rrvnd\" (UID: \"7235d1a2-2058-4379-9b1a-b895e47b55c7\") " pod="kube-system/coredns-674b8bbfcf-rrvnd" Nov 6 23:38:00.318664 kubelet[2705]: I1106 23:38:00.318628 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdbl7\" (UniqueName: \"kubernetes.io/projected/3e293d08-65f3-4e08-bbb6-e12059ed3982-kube-api-access-sdbl7\") pod \"coredns-674b8bbfcf-5pgwz\" (UID: \"3e293d08-65f3-4e08-bbb6-e12059ed3982\") " pod="kube-system/coredns-674b8bbfcf-5pgwz" Nov 6 23:38:00.540999 containerd[1471]: time="2025-11-06T23:38:00.540837040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5pgwz,Uid:3e293d08-65f3-4e08-bbb6-e12059ed3982,Namespace:kube-system,Attempt:0,}" Nov 6 23:38:00.558986 containerd[1471]: time="2025-11-06T23:38:00.558921336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rrvnd,Uid:7235d1a2-2058-4379-9b1a-b895e47b55c7,Namespace:kube-system,Attempt:0,}" Nov 6 23:38:00.810378 kubelet[2705]: I1106 23:38:00.809843 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mll8g" podStartSLOduration=8.019121838 podStartE2EDuration="18.809812839s" podCreationTimestamp="2025-11-06 23:37:42 +0000 UTC" firstStartedPulling="2025-11-06 23:37:43.457185135 +0000 UTC m=+7.119332988" lastFinishedPulling="2025-11-06 23:37:54.247876128 +0000 UTC m=+17.910023989" observedRunningTime="2025-11-06 23:38:00.806092607 +0000 UTC m=+24.468240623" watchObservedRunningTime="2025-11-06 23:38:00.809812839 +0000 UTC m=+24.471960701" Nov 6 23:38:02.530336 systemd-networkd[1382]: cilium_host: Link UP Nov 6 23:38:02.539688 
systemd-networkd[1382]: cilium_net: Link UP Nov 6 23:38:02.540020 systemd-networkd[1382]: cilium_net: Gained carrier Nov 6 23:38:02.540300 systemd-networkd[1382]: cilium_host: Gained carrier Nov 6 23:38:02.541425 systemd-networkd[1382]: cilium_net: Gained IPv6LL Nov 6 23:38:02.689655 systemd-networkd[1382]: cilium_vxlan: Link UP Nov 6 23:38:02.689882 systemd-networkd[1382]: cilium_vxlan: Gained carrier Nov 6 23:38:02.975518 kernel: NET: Registered PF_ALG protocol family Nov 6 23:38:03.122663 systemd-networkd[1382]: cilium_host: Gained IPv6LL Nov 6 23:38:03.762929 systemd-networkd[1382]: cilium_vxlan: Gained IPv6LL Nov 6 23:38:03.878542 systemd-networkd[1382]: lxc_health: Link UP Nov 6 23:38:03.885126 systemd-networkd[1382]: lxc_health: Gained carrier Nov 6 23:38:04.121784 systemd-networkd[1382]: lxcaf5019cf42c5: Link UP Nov 6 23:38:04.136589 kernel: eth0: renamed from tmp9ed1d Nov 6 23:38:04.141952 systemd-networkd[1382]: lxcaf5019cf42c5: Gained carrier Nov 6 23:38:04.184966 systemd-networkd[1382]: lxc06473bfd6e98: Link UP Nov 6 23:38:04.189535 kernel: eth0: renamed from tmp2e274 Nov 6 23:38:04.202329 systemd-networkd[1382]: lxc06473bfd6e98: Gained carrier Nov 6 23:38:05.234734 systemd-networkd[1382]: lxc_health: Gained IPv6LL Nov 6 23:38:05.426869 systemd-networkd[1382]: lxc06473bfd6e98: Gained IPv6LL Nov 6 23:38:05.938966 systemd-networkd[1382]: lxcaf5019cf42c5: Gained IPv6LL Nov 6 23:38:08.086523 ntpd[1441]: Listen normally on 7 cilium_host 192.168.0.98:123 Nov 6 23:38:08.086669 ntpd[1441]: Listen normally on 8 cilium_net [fe80::7c6f:a6ff:fe57:d7d5%4]:123 Nov 6 23:38:08.087197 ntpd[1441]: 6 Nov 23:38:08 ntpd[1441]: Listen normally on 7 cilium_host 192.168.0.98:123 Nov 6 23:38:08.087197 ntpd[1441]: 6 Nov 23:38:08 ntpd[1441]: Listen normally on 8 cilium_net [fe80::7c6f:a6ff:fe57:d7d5%4]:123 Nov 6 23:38:08.087197 ntpd[1441]: 6 Nov 23:38:08 ntpd[1441]: Listen normally on 9 cilium_host [fe80::1480:dff:fee2:bab2%5]:123 Nov 6 23:38:08.087197 ntpd[1441]: 6 Nov 23:38:08 ntpd[1441]: Listen normally on 10 cilium_vxlan [fe80::8cc4:a2ff:fe86:b9cf%6]:123 Nov 6 23:38:08.087197 ntpd[1441]: 6 Nov 23:38:08 ntpd[1441]: Listen normally on 11 lxc_health [fe80::1c82:4eff:fef1:5e0e%8]:123 Nov 6 23:38:08.087197 ntpd[1441]: 6 Nov 23:38:08 ntpd[1441]: Listen normally on 12 lxcaf5019cf42c5 [fe80::f7:27ff:fe7a:e093%10]:123 Nov 6 23:38:08.087197 ntpd[1441]: 6 Nov 23:38:08 ntpd[1441]: Listen normally on 13 lxc06473bfd6e98 [fe80::444e:aaff:fefa:9523%12]:123 Nov 6 23:38:08.086757 ntpd[1441]: Listen normally on 9 cilium_host [fe80::1480:dff:fee2:bab2%5]:123 Nov 6 23:38:08.086835 ntpd[1441]: Listen normally on 10 cilium_vxlan [fe80::8cc4:a2ff:fe86:b9cf%6]:123 Nov 6 23:38:08.086895 ntpd[1441]: Listen normally on 11 lxc_health [fe80::1c82:4eff:fef1:5e0e%8]:123 Nov 6 23:38:08.086953 ntpd[1441]: Listen normally on 12 lxcaf5019cf42c5 [fe80::f7:27ff:fe7a:e093%10]:123 Nov 6 23:38:08.087010 ntpd[1441]: Listen normally on 13 lxc06473bfd6e98 [fe80::444e:aaff:fefa:9523%12]:123 Nov 6 23:38:09.485179 containerd[1471]: time="2025-11-06T23:38:09.485029065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:38:09.488704 containerd[1471]: time="2025-11-06T23:38:09.486696169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:38:09.488704 containerd[1471]: time="2025-11-06T23:38:09.488584069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:09.489390 containerd[1471]: time="2025-11-06T23:38:09.489228205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:09.494511 containerd[1471]: time="2025-11-06T23:38:09.494051143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:38:09.494511 containerd[1471]: time="2025-11-06T23:38:09.494142378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:38:09.494511 containerd[1471]: time="2025-11-06T23:38:09.494172509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:09.494511 containerd[1471]: time="2025-11-06T23:38:09.494341787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:38:09.573768 systemd[1]: Started cri-containerd-2e2747ba5d10d8eaf57e4a3ef88d115e058bcb946bf386a1547c246b8346e6b6.scope - libcontainer container 2e2747ba5d10d8eaf57e4a3ef88d115e058bcb946bf386a1547c246b8346e6b6. Nov 6 23:38:09.576614 systemd[1]: Started cri-containerd-9ed1d6d298f379045368a8988774f1bdf9d936ff55e6ca0bff68c80674ab8d6f.scope - libcontainer container 9ed1d6d298f379045368a8988774f1bdf9d936ff55e6ca0bff68c80674ab8d6f. Nov 6 23:38:09.664536 containerd[1471]: time="2025-11-06T23:38:09.664388503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rrvnd,Uid:7235d1a2-2058-4379-9b1a-b895e47b55c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e2747ba5d10d8eaf57e4a3ef88d115e058bcb946bf386a1547c246b8346e6b6\"" Nov 6 23:38:09.675066 containerd[1471]: time="2025-11-06T23:38:09.674863330Z" level=info msg="CreateContainer within sandbox \"2e2747ba5d10d8eaf57e4a3ef88d115e058bcb946bf386a1547c246b8346e6b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:38:09.709980 containerd[1471]: time="2025-11-06T23:38:09.708247808Z" level=info msg="CreateContainer within sandbox \"2e2747ba5d10d8eaf57e4a3ef88d115e058bcb946bf386a1547c246b8346e6b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1df25fb4a87062b98ad1dce835dfce121af8b42cf02257b55ae416f08fc6a408\"" Nov 6 23:38:09.712705 containerd[1471]: time="2025-11-06T23:38:09.712645413Z" level=info msg="StartContainer for \"1df25fb4a87062b98ad1dce835dfce121af8b42cf02257b55ae416f08fc6a408\"" Nov 6 23:38:09.727525 containerd[1471]: time="2025-11-06T23:38:09.727413731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5pgwz,Uid:3e293d08-65f3-4e08-bbb6-e12059ed3982,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ed1d6d298f379045368a8988774f1bdf9d936ff55e6ca0bff68c80674ab8d6f\"" Nov 6 23:38:09.739217 containerd[1471]: time="2025-11-06T23:38:09.738625228Z" level=info msg="CreateContainer within sandbox \"9ed1d6d298f379045368a8988774f1bdf9d936ff55e6ca0bff68c80674ab8d6f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:38:09.769004 containerd[1471]: time="2025-11-06T23:38:09.768933503Z" level=info msg="CreateContainer within sandbox 
\"9ed1d6d298f379045368a8988774f1bdf9d936ff55e6ca0bff68c80674ab8d6f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3fab16cd6589d21bc501b804042306e987ce1a2353d64ea009e511408cbcfe83\"" Nov 6 23:38:09.771521 containerd[1471]: time="2025-11-06T23:38:09.771455976Z" level=info msg="StartContainer for \"3fab16cd6589d21bc501b804042306e987ce1a2353d64ea009e511408cbcfe83\"" Nov 6 23:38:09.794340 systemd[1]: Started cri-containerd-1df25fb4a87062b98ad1dce835dfce121af8b42cf02257b55ae416f08fc6a408.scope - libcontainer container 1df25fb4a87062b98ad1dce835dfce121af8b42cf02257b55ae416f08fc6a408. Nov 6 23:38:09.865965 systemd[1]: Started cri-containerd-3fab16cd6589d21bc501b804042306e987ce1a2353d64ea009e511408cbcfe83.scope - libcontainer container 3fab16cd6589d21bc501b804042306e987ce1a2353d64ea009e511408cbcfe83. Nov 6 23:38:09.890802 containerd[1471]: time="2025-11-06T23:38:09.890743080Z" level=info msg="StartContainer for \"1df25fb4a87062b98ad1dce835dfce121af8b42cf02257b55ae416f08fc6a408\" returns successfully" Nov 6 23:38:09.936781 containerd[1471]: time="2025-11-06T23:38:09.936725399Z" level=info msg="StartContainer for \"3fab16cd6589d21bc501b804042306e987ce1a2353d64ea009e511408cbcfe83\" returns successfully" Nov 6 23:38:10.648999 kubelet[2705]: I1106 23:38:10.648757 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 23:38:10.870495 kubelet[2705]: I1106 23:38:10.868278 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5pgwz" podStartSLOduration=28.86825585 podStartE2EDuration="28.86825585s" podCreationTimestamp="2025-11-06 23:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:38:10.867860837 +0000 UTC m=+34.530008699" watchObservedRunningTime="2025-11-06 23:38:10.86825585 +0000 UTC m=+34.530403712" Nov 6 23:38:10.870495 kubelet[2705]: I1106 23:38:10.868445 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rrvnd" podStartSLOduration=28.868430513 podStartE2EDuration="28.868430513s" podCreationTimestamp="2025-11-06 23:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:38:10.847023044 +0000 UTC m=+34.509170906" watchObservedRunningTime="2025-11-06 23:38:10.868430513 +0000 UTC m=+34.530578374" Nov 6 23:38:29.484977 systemd[1]: Started sshd@9-10.128.0.22:22-139.178.89.65:54584.service - OpenSSH per-connection server daemon (139.178.89.65:54584). Nov 6 23:38:29.774768 sshd[4084]: Accepted publickey for core from 139.178.89.65 port 54584 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:38:29.776796 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:38:29.784538 systemd-logind[1459]: New session 10 of user core. Nov 6 23:38:29.792793 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 23:38:30.088295 sshd[4086]: Connection closed by 139.178.89.65 port 54584 Nov 6 23:38:30.089616 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Nov 6 23:38:30.094110 systemd[1]: sshd@9-10.128.0.22:22-139.178.89.65:54584.service: Deactivated successfully. Nov 6 23:38:30.097107 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 23:38:30.099791 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit. 
Nov 6 23:38:30.101803 systemd-logind[1459]: Removed session 10. Nov 6 23:38:35.147942 systemd[1]: Started sshd@10-10.128.0.22:22-139.178.89.65:54592.service - OpenSSH per-connection server daemon (139.178.89.65:54592). Nov 6 23:38:35.441376 sshd[4099]: Accepted publickey for core from 139.178.89.65 port 54592 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:38:35.443938 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:38:35.449882 systemd-logind[1459]: New session 11 of user core. Nov 6 23:38:35.460792 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 23:38:35.738612 sshd[4101]: Connection closed by 139.178.89.65 port 54592 Nov 6 23:38:35.739432 sshd-session[4099]: pam_unix(sshd:session): session closed for user core Nov 6 23:38:35.746371 systemd[1]: sshd@10-10.128.0.22:22-139.178.89.65:54592.service: Deactivated successfully. Nov 6 23:38:35.751055 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 23:38:35.754980 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit. Nov 6 23:38:35.756506 systemd-logind[1459]: Removed session 11. Nov 6 23:38:40.794393 systemd[1]: Started sshd@11-10.128.0.22:22-139.178.89.65:46520.service - OpenSSH per-connection server daemon (139.178.89.65:46520). Nov 6 23:38:41.093287 sshd[4117]: Accepted publickey for core from 139.178.89.65 port 46520 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:38:41.095007 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:38:41.102500 systemd-logind[1459]: New session 12 of user core. Nov 6 23:38:41.113847 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 23:38:41.390657 sshd[4119]: Connection closed by 139.178.89.65 port 46520 Nov 6 23:38:41.391973 sshd-session[4117]: pam_unix(sshd:session): session closed for user core Nov 6 23:38:41.396217 systemd[1]: sshd@11-10.128.0.22:22-139.178.89.65:46520.service: Deactivated successfully. Nov 6 23:38:41.399153 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 23:38:41.401795 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit. Nov 6 23:38:41.403411 systemd-logind[1459]: Removed session 12. Nov 6 23:38:46.450922 systemd[1]: Started sshd@12-10.128.0.22:22-139.178.89.65:54354.service - OpenSSH per-connection server daemon (139.178.89.65:54354). Nov 6 23:38:46.755348 sshd[4134]: Accepted publickey for core from 139.178.89.65 port 54354 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:38:46.756231 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:38:46.762599 systemd-logind[1459]: New session 13 of user core. Nov 6 23:38:46.770814 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 23:38:47.059749 sshd[4136]: Connection closed by 139.178.89.65 port 54354 Nov 6 23:38:47.061056 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Nov 6 23:38:47.067169 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit. Nov 6 23:38:47.068194 systemd[1]: sshd@12-10.128.0.22:22-139.178.89.65:54354.service: Deactivated successfully. Nov 6 23:38:47.071082 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 23:38:47.072690 systemd-logind[1459]: Removed session 13. 
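Side note (not part of the log): the SSH activity from here on repeats a fixed pattern (Accepted publickey, pam session opened, New session N, connection closed, scope deactivated, Removed session N). Pairing the "New session" and "Removed session" timestamps gives each session's wall-clock length; a tiny sketch using session 10 above, dates omitted since both entries fall on Nov 6:

    from datetime import datetime

    # journald timestamps copied from the "New session 10" and "Removed session 10" entries above.
    opened = datetime.strptime("23:38:29.784538", "%H:%M:%S.%f")
    closed = datetime.strptime("23:38:30.101803", "%H:%M:%S.%f")

    print(f"session 10 lasted {(closed - opened).total_seconds():.3f} s")   # ~0.317 s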
Nov 6 23:38:52.118952 systemd[1]: Started sshd@13-10.128.0.22:22-139.178.89.65:54366.service - OpenSSH per-connection server daemon (139.178.89.65:54366). Nov 6 23:38:52.413279 sshd[4149]: Accepted publickey for core from 139.178.89.65 port 54366 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:38:52.415999 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:38:52.426582 systemd-logind[1459]: New session 14 of user core. Nov 6 23:38:52.433748 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 23:38:52.710517 sshd[4151]: Connection closed by 139.178.89.65 port 54366 Nov 6 23:38:52.711563 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Nov 6 23:38:52.716906 systemd[1]: sshd@13-10.128.0.22:22-139.178.89.65:54366.service: Deactivated successfully. Nov 6 23:38:52.720439 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 23:38:52.721763 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit. Nov 6 23:38:52.723368 systemd-logind[1459]: Removed session 14. Nov 6 23:38:52.773959 systemd[1]: Started sshd@14-10.128.0.22:22-139.178.89.65:54372.service - OpenSSH per-connection server daemon (139.178.89.65:54372). Nov 6 23:38:53.065825 sshd[4164]: Accepted publickey for core from 139.178.89.65 port 54372 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:38:53.067345 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:38:53.073532 systemd-logind[1459]: New session 15 of user core. Nov 6 23:38:53.087776 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 23:38:53.450227 sshd[4166]: Connection closed by 139.178.89.65 port 54372 Nov 6 23:38:53.453724 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Nov 6 23:38:53.461235 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit. Nov 6 23:38:53.463983 systemd[1]: sshd@14-10.128.0.22:22-139.178.89.65:54372.service: Deactivated successfully. Nov 6 23:38:53.467703 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 23:38:53.470089 systemd-logind[1459]: Removed session 15. Nov 6 23:38:53.512940 systemd[1]: Started sshd@15-10.128.0.22:22-139.178.89.65:54384.service - OpenSSH per-connection server daemon (139.178.89.65:54384). Nov 6 23:38:53.807789 sshd[4176]: Accepted publickey for core from 139.178.89.65 port 54384 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:38:53.810070 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:38:53.817815 systemd-logind[1459]: New session 16 of user core. Nov 6 23:38:53.824805 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 23:38:54.107164 sshd[4178]: Connection closed by 139.178.89.65 port 54384 Nov 6 23:38:54.108320 sshd-session[4176]: pam_unix(sshd:session): session closed for user core Nov 6 23:38:54.114260 systemd[1]: sshd@15-10.128.0.22:22-139.178.89.65:54384.service: Deactivated successfully. Nov 6 23:38:54.117126 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 23:38:54.118375 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. Nov 6 23:38:54.120285 systemd-logind[1459]: Removed session 16. Nov 6 23:38:59.164999 systemd[1]: Started sshd@16-10.128.0.22:22-139.178.89.65:35970.service - OpenSSH per-connection server daemon (139.178.89.65:35970). 
Nov 6 23:38:59.466062 sshd[4190]: Accepted publickey for core from 139.178.89.65 port 35970 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:38:59.468135 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:38:59.474678 systemd-logind[1459]: New session 17 of user core. Nov 6 23:38:59.484772 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 23:38:59.760807 sshd[4193]: Connection closed by 139.178.89.65 port 35970 Nov 6 23:38:59.762196 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Nov 6 23:38:59.767644 systemd[1]: sshd@16-10.128.0.22:22-139.178.89.65:35970.service: Deactivated successfully. Nov 6 23:38:59.770968 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 23:38:59.772809 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. Nov 6 23:38:59.774820 systemd-logind[1459]: Removed session 17. Nov 6 23:39:04.823931 systemd[1]: Started sshd@17-10.128.0.22:22-139.178.89.65:35982.service - OpenSSH per-connection server daemon (139.178.89.65:35982). Nov 6 23:39:05.128390 sshd[4205]: Accepted publickey for core from 139.178.89.65 port 35982 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:39:05.130213 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:05.136481 systemd-logind[1459]: New session 18 of user core. Nov 6 23:39:05.144740 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 23:39:05.430891 sshd[4207]: Connection closed by 139.178.89.65 port 35982 Nov 6 23:39:05.431990 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:05.436855 systemd[1]: sshd@17-10.128.0.22:22-139.178.89.65:35982.service: Deactivated successfully. Nov 6 23:39:05.440534 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 23:39:05.442714 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. Nov 6 23:39:05.444486 systemd-logind[1459]: Removed session 18. Nov 6 23:39:05.493015 systemd[1]: Started sshd@18-10.128.0.22:22-139.178.89.65:35998.service - OpenSSH per-connection server daemon (139.178.89.65:35998). Nov 6 23:39:05.780994 sshd[4219]: Accepted publickey for core from 139.178.89.65 port 35998 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:39:05.782843 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:05.790251 systemd-logind[1459]: New session 19 of user core. Nov 6 23:39:05.796763 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 23:39:06.189054 sshd[4221]: Connection closed by 139.178.89.65 port 35998 Nov 6 23:39:06.189865 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:06.194705 systemd[1]: sshd@18-10.128.0.22:22-139.178.89.65:35998.service: Deactivated successfully. Nov 6 23:39:06.198169 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 23:39:06.201039 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit. Nov 6 23:39:06.203116 systemd-logind[1459]: Removed session 19. Nov 6 23:39:06.247232 systemd[1]: Started sshd@19-10.128.0.22:22-139.178.89.65:46978.service - OpenSSH per-connection server daemon (139.178.89.65:46978). 
Nov 6 23:39:06.550363 sshd[4230]: Accepted publickey for core from 139.178.89.65 port 46978 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:39:06.551298 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:06.559697 systemd-logind[1459]: New session 20 of user core. Nov 6 23:39:06.564756 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 23:39:07.500373 sshd[4232]: Connection closed by 139.178.89.65 port 46978 Nov 6 23:39:07.503518 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:07.512726 systemd[1]: sshd@19-10.128.0.22:22-139.178.89.65:46978.service: Deactivated successfully. Nov 6 23:39:07.518380 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 23:39:07.521050 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit. Nov 6 23:39:07.523897 systemd-logind[1459]: Removed session 20. Nov 6 23:39:07.565013 systemd[1]: Started sshd@20-10.128.0.22:22-139.178.89.65:46992.service - OpenSSH per-connection server daemon (139.178.89.65:46992). Nov 6 23:39:07.852662 sshd[4249]: Accepted publickey for core from 139.178.89.65 port 46992 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:39:07.854817 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:07.866481 systemd-logind[1459]: New session 21 of user core. Nov 6 23:39:07.874745 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 23:39:08.285210 sshd[4251]: Connection closed by 139.178.89.65 port 46992 Nov 6 23:39:08.286244 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:08.292097 systemd[1]: sshd@20-10.128.0.22:22-139.178.89.65:46992.service: Deactivated successfully. Nov 6 23:39:08.296263 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 23:39:08.297755 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit. Nov 6 23:39:08.299548 systemd-logind[1459]: Removed session 21. Nov 6 23:39:08.345925 systemd[1]: Started sshd@21-10.128.0.22:22-139.178.89.65:47002.service - OpenSSH per-connection server daemon (139.178.89.65:47002). Nov 6 23:39:08.652194 sshd[4261]: Accepted publickey for core from 139.178.89.65 port 47002 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:39:08.654038 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:08.660287 systemd-logind[1459]: New session 22 of user core. Nov 6 23:39:08.665689 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 23:39:08.945715 sshd[4263]: Connection closed by 139.178.89.65 port 47002 Nov 6 23:39:08.947109 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:08.952896 systemd-logind[1459]: Session 22 logged out. Waiting for processes to exit. Nov 6 23:39:08.954016 systemd[1]: sshd@21-10.128.0.22:22-139.178.89.65:47002.service: Deactivated successfully. Nov 6 23:39:08.957535 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 23:39:08.959189 systemd-logind[1459]: Removed session 22. Nov 6 23:39:14.004917 systemd[1]: Started sshd@22-10.128.0.22:22-139.178.89.65:47018.service - OpenSSH per-connection server daemon (139.178.89.65:47018). 
Nov 6 23:39:14.314746 sshd[4277]: Accepted publickey for core from 139.178.89.65 port 47018 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:39:14.317132 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:14.325020 systemd-logind[1459]: New session 23 of user core. Nov 6 23:39:14.335766 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 23:39:14.608830 sshd[4281]: Connection closed by 139.178.89.65 port 47018 Nov 6 23:39:14.610169 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:14.616087 systemd[1]: sshd@22-10.128.0.22:22-139.178.89.65:47018.service: Deactivated successfully. Nov 6 23:39:14.619913 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 23:39:14.621202 systemd-logind[1459]: Session 23 logged out. Waiting for processes to exit. Nov 6 23:39:14.622882 systemd-logind[1459]: Removed session 23. Nov 6 23:39:19.667967 systemd[1]: Started sshd@23-10.128.0.22:22-139.178.89.65:40248.service - OpenSSH per-connection server daemon (139.178.89.65:40248). Nov 6 23:39:19.975613 sshd[4292]: Accepted publickey for core from 139.178.89.65 port 40248 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:39:19.977331 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:19.983909 systemd-logind[1459]: New session 24 of user core. Nov 6 23:39:19.992046 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 6 23:39:20.276241 sshd[4294]: Connection closed by 139.178.89.65 port 40248 Nov 6 23:39:20.277577 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:20.282934 systemd[1]: sshd@23-10.128.0.22:22-139.178.89.65:40248.service: Deactivated successfully. Nov 6 23:39:20.286136 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 23:39:20.287412 systemd-logind[1459]: Session 24 logged out. Waiting for processes to exit. Nov 6 23:39:20.289119 systemd-logind[1459]: Removed session 24. Nov 6 23:39:25.338919 systemd[1]: Started sshd@24-10.128.0.22:22-139.178.89.65:40252.service - OpenSSH per-connection server daemon (139.178.89.65:40252). Nov 6 23:39:25.638647 sshd[4306]: Accepted publickey for core from 139.178.89.65 port 40252 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:39:25.640390 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:25.646554 systemd-logind[1459]: New session 25 of user core. Nov 6 23:39:25.655749 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 6 23:39:25.932999 sshd[4308]: Connection closed by 139.178.89.65 port 40252 Nov 6 23:39:25.934168 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:25.939809 systemd[1]: sshd@24-10.128.0.22:22-139.178.89.65:40252.service: Deactivated successfully. Nov 6 23:39:25.943148 systemd[1]: session-25.scope: Deactivated successfully. Nov 6 23:39:25.944426 systemd-logind[1459]: Session 25 logged out. Waiting for processes to exit. Nov 6 23:39:25.945901 systemd-logind[1459]: Removed session 25. Nov 6 23:39:25.996906 systemd[1]: Started sshd@25-10.128.0.22:22-139.178.89.65:40268.service - OpenSSH per-connection server daemon (139.178.89.65:40268). 
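Every login above follows the same pattern: a per-connection sshd@N-<local>:22-<peer>:<port>.service unit, a pam_unix "session opened" line, a session-N.scope, and a matching "session closed" / "Removed session N". A rough way to confirm that sessions in a dump like this all pair off is to tally the pam_unix lines; a small sketch that reads a journal text dump on stdin (e.g. from `journalctl -o short-precise`):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Counts sshd session open/close events in a journal dump piped to stdin.
// The match strings are taken from the pam_unix lines above.
func main() {
	opened, closed := 0, 0
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		switch {
		case strings.Contains(line, "pam_unix(sshd:session): session opened"):
			opened++
		case strings.Contains(line, "pam_unix(sshd:session): session closed"):
			closed++
		}
	}
	fmt.Printf("opened=%d closed=%d unpaired=%d\n", opened, closed, opened-closed)
}
```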
Nov 6 23:39:26.299102 sshd[4320]: Accepted publickey for core from 139.178.89.65 port 40268 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:39:26.300675 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:26.306692 systemd-logind[1459]: New session 26 of user core. Nov 6 23:39:26.313713 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 6 23:39:29.145050 containerd[1471]: time="2025-11-06T23:39:29.144620005Z" level=info msg="StopContainer for \"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\" with timeout 30 (s)" Nov 6 23:39:29.147729 containerd[1471]: time="2025-11-06T23:39:29.147113470Z" level=info msg="Stop container \"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\" with signal terminated" Nov 6 23:39:29.168416 containerd[1471]: time="2025-11-06T23:39:29.168347245Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:39:29.191081 containerd[1471]: time="2025-11-06T23:39:29.191029395Z" level=info msg="StopContainer for \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\" with timeout 2 (s)" Nov 6 23:39:29.192928 containerd[1471]: time="2025-11-06T23:39:29.192574389Z" level=info msg="Stop container \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\" with signal terminated" Nov 6 23:39:29.209675 systemd-networkd[1382]: lxc_health: Link DOWN Nov 6 23:39:29.209686 systemd-networkd[1382]: lxc_health: Lost carrier Nov 6 23:39:29.232555 systemd[1]: cri-containerd-8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201.scope: Deactivated successfully. Nov 6 23:39:29.254511 systemd[1]: cri-containerd-5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0.scope: Deactivated successfully. Nov 6 23:39:29.255006 systemd[1]: cri-containerd-5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0.scope: Consumed 9.980s CPU time, 126.7M memory peak, 144K read from disk, 13.3M written to disk. Nov 6 23:39:29.328420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201-rootfs.mount: Deactivated successfully. Nov 6 23:39:29.345327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0-rootfs.mount: Deactivated successfully. 
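The teardown above is the usual graceful-stop sequence: containerd logs "Stop container ... with signal terminated" together with a StopContainer timeout (30 s for the operator container, 2 s for the cilium-agent container), after which the runtime escalates to a kill. The same term-then-kill pattern, sketched against an ordinary process rather than a container (a hypothetical standalone example, not containerd code; "sleep 300" stands in for the container's init process):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	cmd.Process.Signal(syscall.SIGTERM) // "Stop container ... with signal terminated"
	select {
	case err := <-done:
		fmt.Println("exited after SIGTERM:", err)
	case <-time.After(2 * time.Second): // the "timeout 2 (s)" in the log
		cmd.Process.Kill()
		fmt.Println("timeout hit, sent SIGKILL:", <-done)
	}
}
```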
Nov 6 23:39:29.360224 containerd[1471]: time="2025-11-06T23:39:29.360113713Z" level=info msg="shim disconnected" id=8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201 namespace=k8s.io Nov 6 23:39:29.360224 containerd[1471]: time="2025-11-06T23:39:29.360193972Z" level=warning msg="cleaning up after shim disconnected" id=8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201 namespace=k8s.io Nov 6 23:39:29.360224 containerd[1471]: time="2025-11-06T23:39:29.360209415Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:39:29.361503 containerd[1471]: time="2025-11-06T23:39:29.361277312Z" level=info msg="shim disconnected" id=5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0 namespace=k8s.io Nov 6 23:39:29.361503 containerd[1471]: time="2025-11-06T23:39:29.361371401Z" level=warning msg="cleaning up after shim disconnected" id=5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0 namespace=k8s.io Nov 6 23:39:29.361503 containerd[1471]: time="2025-11-06T23:39:29.361386063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:39:29.386762 containerd[1471]: time="2025-11-06T23:39:29.386571129Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:39:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:39:29.389817 containerd[1471]: time="2025-11-06T23:39:29.389753472Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:39:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:39:29.395237 containerd[1471]: time="2025-11-06T23:39:29.394962957Z" level=info msg="StopContainer for \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\" returns successfully" Nov 6 23:39:29.398509 containerd[1471]: time="2025-11-06T23:39:29.398354136Z" level=info msg="StopContainer for \"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\" returns successfully" Nov 6 23:39:29.398714 containerd[1471]: time="2025-11-06T23:39:29.398515253Z" level=info msg="StopPodSandbox for \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\"" Nov 6 23:39:29.398714 containerd[1471]: time="2025-11-06T23:39:29.398595283Z" level=info msg="Container to stop \"9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:39:29.398714 containerd[1471]: time="2025-11-06T23:39:29.398654044Z" level=info msg="Container to stop \"eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:39:29.398714 containerd[1471]: time="2025-11-06T23:39:29.398675445Z" level=info msg="Container to stop \"d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:39:29.398714 containerd[1471]: time="2025-11-06T23:39:29.398697028Z" level=info msg="Container to stop \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:39:29.398714 containerd[1471]: time="2025-11-06T23:39:29.398712665Z" level=info msg="Container to stop \"b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Nov 6 23:39:29.402915 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece-shm.mount: Deactivated successfully. Nov 6 23:39:29.405704 containerd[1471]: time="2025-11-06T23:39:29.405378754Z" level=info msg="StopPodSandbox for \"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\"" Nov 6 23:39:29.407593 containerd[1471]: time="2025-11-06T23:39:29.407498624Z" level=info msg="Container to stop \"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:39:29.424324 systemd[1]: cri-containerd-df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece.scope: Deactivated successfully. Nov 6 23:39:29.430233 systemd[1]: cri-containerd-26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176.scope: Deactivated successfully. Nov 6 23:39:29.495238 containerd[1471]: time="2025-11-06T23:39:29.494872056Z" level=info msg="shim disconnected" id=26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176 namespace=k8s.io Nov 6 23:39:29.495238 containerd[1471]: time="2025-11-06T23:39:29.494978588Z" level=warning msg="cleaning up after shim disconnected" id=26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176 namespace=k8s.io Nov 6 23:39:29.495238 containerd[1471]: time="2025-11-06T23:39:29.495018368Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:39:29.497008 containerd[1471]: time="2025-11-06T23:39:29.496922420Z" level=info msg="shim disconnected" id=df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece namespace=k8s.io Nov 6 23:39:29.497008 containerd[1471]: time="2025-11-06T23:39:29.496983955Z" level=warning msg="cleaning up after shim disconnected" id=df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece namespace=k8s.io Nov 6 23:39:29.497405 containerd[1471]: time="2025-11-06T23:39:29.496997669Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:39:29.526227 containerd[1471]: time="2025-11-06T23:39:29.526155310Z" level=info msg="TearDown network for sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" successfully" Nov 6 23:39:29.526227 containerd[1471]: time="2025-11-06T23:39:29.526220873Z" level=info msg="StopPodSandbox for \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" returns successfully" Nov 6 23:39:29.527528 containerd[1471]: time="2025-11-06T23:39:29.526663403Z" level=warning msg="cleanup warnings time=\"2025-11-06T23:39:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 6 23:39:29.528893 containerd[1471]: time="2025-11-06T23:39:29.528852462Z" level=info msg="TearDown network for sandbox \"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\" successfully" Nov 6 23:39:29.529053 containerd[1471]: time="2025-11-06T23:39:29.529027238Z" level=info msg="StopPodSandbox for \"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\" returns successfully" Nov 6 23:39:29.651092 kubelet[2705]: I1106 23:39:29.650907 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-hostproc\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.651092 
kubelet[2705]: I1106 23:39:29.650973 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-cilium-cgroup\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.651092 kubelet[2705]: I1106 23:39:29.651011 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83df6523-ec5e-46af-8792-b01a49937de4-clustermesh-secrets\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.651092 kubelet[2705]: I1106 23:39:29.651037 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-host-proc-sys-kernel\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.651092 kubelet[2705]: I1106 23:39:29.651067 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw6g8\" (UniqueName: \"kubernetes.io/projected/83df6523-ec5e-46af-8792-b01a49937de4-kube-api-access-jw6g8\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.651092 kubelet[2705]: I1106 23:39:29.651095 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-cilium-run\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.652153 kubelet[2705]: I1106 23:39:29.651118 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-xtables-lock\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.652153 kubelet[2705]: I1106 23:39:29.651142 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83df6523-ec5e-46af-8792-b01a49937de4-cilium-config-path\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.652153 kubelet[2705]: I1106 23:39:29.651167 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-cni-path\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.652153 kubelet[2705]: I1106 23:39:29.651195 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66tw2\" (UniqueName: \"kubernetes.io/projected/8a1f9b4f-6ab5-4a17-9c76-891197dc40b3-kube-api-access-66tw2\") pod \"8a1f9b4f-6ab5-4a17-9c76-891197dc40b3\" (UID: \"8a1f9b4f-6ab5-4a17-9c76-891197dc40b3\") " Nov 6 23:39:29.652153 kubelet[2705]: I1106 23:39:29.651224 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-etc-cni-netd\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.652153 kubelet[2705]: I1106 
23:39:29.651249 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-host-proc-sys-net\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.654538 kubelet[2705]: I1106 23:39:29.651278 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-bpf-maps\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.654538 kubelet[2705]: I1106 23:39:29.651305 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a1f9b4f-6ab5-4a17-9c76-891197dc40b3-cilium-config-path\") pod \"8a1f9b4f-6ab5-4a17-9c76-891197dc40b3\" (UID: \"8a1f9b4f-6ab5-4a17-9c76-891197dc40b3\") " Nov 6 23:39:29.654538 kubelet[2705]: I1106 23:39:29.651331 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83df6523-ec5e-46af-8792-b01a49937de4-hubble-tls\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.654538 kubelet[2705]: I1106 23:39:29.651355 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-lib-modules\") pod \"83df6523-ec5e-46af-8792-b01a49937de4\" (UID: \"83df6523-ec5e-46af-8792-b01a49937de4\") " Nov 6 23:39:29.654538 kubelet[2705]: I1106 23:39:29.651501 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:29.654538 kubelet[2705]: I1106 23:39:29.651566 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-hostproc" (OuterVolumeSpecName: "hostproc") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:29.654963 kubelet[2705]: I1106 23:39:29.651592 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:29.654963 kubelet[2705]: I1106 23:39:29.652807 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-cni-path" (OuterVolumeSpecName: "cni-path") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:29.654963 kubelet[2705]: I1106 23:39:29.652911 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:29.657526 kubelet[2705]: I1106 23:39:29.655992 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:29.657526 kubelet[2705]: I1106 23:39:29.656049 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:29.657526 kubelet[2705]: I1106 23:39:29.656095 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:29.657526 kubelet[2705]: I1106 23:39:29.656144 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:29.657526 kubelet[2705]: I1106 23:39:29.656171 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:39:29.663681 kubelet[2705]: I1106 23:39:29.663634 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83df6523-ec5e-46af-8792-b01a49937de4-kube-api-access-jw6g8" (OuterVolumeSpecName: "kube-api-access-jw6g8") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "kube-api-access-jw6g8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:39:29.663828 kubelet[2705]: I1106 23:39:29.663791 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83df6523-ec5e-46af-8792-b01a49937de4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 23:39:29.663994 kubelet[2705]: I1106 23:39:29.663955 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a1f9b4f-6ab5-4a17-9c76-891197dc40b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8a1f9b4f-6ab5-4a17-9c76-891197dc40b3" (UID: "8a1f9b4f-6ab5-4a17-9c76-891197dc40b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:39:29.665190 kubelet[2705]: I1106 23:39:29.665139 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83df6523-ec5e-46af-8792-b01a49937de4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:39:29.667136 kubelet[2705]: I1106 23:39:29.667101 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83df6523-ec5e-46af-8792-b01a49937de4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "83df6523-ec5e-46af-8792-b01a49937de4" (UID: "83df6523-ec5e-46af-8792-b01a49937de4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:39:29.667136 kubelet[2705]: I1106 23:39:29.667114 2705 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a1f9b4f-6ab5-4a17-9c76-891197dc40b3-kube-api-access-66tw2" (OuterVolumeSpecName: "kube-api-access-66tw2") pod "8a1f9b4f-6ab5-4a17-9c76-891197dc40b3" (UID: "8a1f9b4f-6ab5-4a17-9c76-891197dc40b3"). InnerVolumeSpecName "kube-api-access-66tw2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:39:29.752770 kubelet[2705]: I1106 23:39:29.752720 2705 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-etc-cni-netd\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.753573 kubelet[2705]: I1106 23:39:29.753013 2705 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-host-proc-sys-net\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.753573 kubelet[2705]: I1106 23:39:29.753391 2705 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-bpf-maps\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.753573 kubelet[2705]: I1106 23:39:29.753416 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a1f9b4f-6ab5-4a17-9c76-891197dc40b3-cilium-config-path\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.753573 kubelet[2705]: I1106 23:39:29.753433 2705 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83df6523-ec5e-46af-8792-b01a49937de4-hubble-tls\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.753573 kubelet[2705]: I1106 23:39:29.753452 2705 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-lib-modules\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.753573 kubelet[2705]: I1106 23:39:29.753575 2705 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-hostproc\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.753935 kubelet[2705]: I1106 23:39:29.753597 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-cilium-cgroup\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.753935 kubelet[2705]: I1106 23:39:29.753634 2705 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83df6523-ec5e-46af-8792-b01a49937de4-clustermesh-secrets\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.753935 kubelet[2705]: I1106 23:39:29.753652 2705 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-host-proc-sys-kernel\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.753935 kubelet[2705]: I1106 23:39:29.753668 2705 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jw6g8\" (UniqueName: \"kubernetes.io/projected/83df6523-ec5e-46af-8792-b01a49937de4-kube-api-access-jw6g8\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.753935 
kubelet[2705]: I1106 23:39:29.753685 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-cilium-run\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.753935 kubelet[2705]: I1106 23:39:29.753700 2705 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-xtables-lock\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.753935 kubelet[2705]: I1106 23:39:29.753717 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83df6523-ec5e-46af-8792-b01a49937de4-cilium-config-path\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.754151 kubelet[2705]: I1106 23:39:29.753735 2705 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83df6523-ec5e-46af-8792-b01a49937de4-cni-path\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:29.754151 kubelet[2705]: I1106 23:39:29.753756 2705 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-66tw2\" (UniqueName: \"kubernetes.io/projected/8a1f9b4f-6ab5-4a17-9c76-891197dc40b3-kube-api-access-66tw2\") on node \"ci-4230-2-4-nightly-20251106-2100-01d38e81a79945a96acf\" DevicePath \"\"" Nov 6 23:39:30.019862 kubelet[2705]: I1106 23:39:30.019569 2705 scope.go:117] "RemoveContainer" containerID="5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0" Nov 6 23:39:30.022758 containerd[1471]: time="2025-11-06T23:39:30.021178870Z" level=info msg="RemoveContainer for \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\"" Nov 6 23:39:30.030076 containerd[1471]: time="2025-11-06T23:39:30.030016303Z" level=info msg="RemoveContainer for \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\" returns successfully" Nov 6 23:39:30.031521 kubelet[2705]: I1106 23:39:30.031490 2705 scope.go:117] "RemoveContainer" containerID="d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446" Nov 6 23:39:30.034279 systemd[1]: Removed slice kubepods-burstable-pod83df6523_ec5e_46af_8792_b01a49937de4.slice - libcontainer container kubepods-burstable-pod83df6523_ec5e_46af_8792_b01a49937de4.slice. Nov 6 23:39:30.034977 systemd[1]: kubepods-burstable-pod83df6523_ec5e_46af_8792_b01a49937de4.slice: Consumed 10.119s CPU time, 127.1M memory peak, 144K read from disk, 13.3M written to disk. Nov 6 23:39:30.039772 containerd[1471]: time="2025-11-06T23:39:30.039188244Z" level=info msg="RemoveContainer for \"d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446\"" Nov 6 23:39:30.042357 systemd[1]: Removed slice kubepods-besteffort-pod8a1f9b4f_6ab5_4a17_9c76_891197dc40b3.slice - libcontainer container kubepods-besteffort-pod8a1f9b4f_6ab5_4a17_9c76_891197dc40b3.slice. 
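The slice names systemd removes here are derived from the pod UID and QoS class visible in the kubelet lines: kubepods-burstable-pod83df6523_ec5e_46af_8792_b01a49937de4.slice corresponds to UID 83df6523-ec5e-46af-8792-b01a49937de4, with dashes mapped to underscores because "-" is the parent/child separator in systemd slice names. A small sketch of that naming pattern as inferred from the log (not an official kubelet API):

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice rebuilds the slice name seen in the systemd lines above from the
// pod's QoS class and UID; '-' in the UID is escaped to '_' so it does not
// introduce extra slice hierarchy levels.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "83df6523-ec5e-46af-8792-b01a49937de4"))
	// kubepods-burstable-pod83df6523_ec5e_46af_8792_b01a49937de4.slice
}
```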
Nov 6 23:39:30.047207 containerd[1471]: time="2025-11-06T23:39:30.047136288Z" level=info msg="RemoveContainer for \"d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446\" returns successfully" Nov 6 23:39:30.047410 kubelet[2705]: I1106 23:39:30.047389 2705 scope.go:117] "RemoveContainer" containerID="b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e" Nov 6 23:39:30.049881 containerd[1471]: time="2025-11-06T23:39:30.049846886Z" level=info msg="RemoveContainer for \"b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e\"" Nov 6 23:39:30.055287 containerd[1471]: time="2025-11-06T23:39:30.055238270Z" level=info msg="RemoveContainer for \"b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e\" returns successfully" Nov 6 23:39:30.055705 kubelet[2705]: I1106 23:39:30.055660 2705 scope.go:117] "RemoveContainer" containerID="eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71" Nov 6 23:39:30.057375 containerd[1471]: time="2025-11-06T23:39:30.057210367Z" level=info msg="RemoveContainer for \"eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71\"" Nov 6 23:39:30.068771 containerd[1471]: time="2025-11-06T23:39:30.068712871Z" level=info msg="RemoveContainer for \"eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71\" returns successfully" Nov 6 23:39:30.069373 kubelet[2705]: I1106 23:39:30.069265 2705 scope.go:117] "RemoveContainer" containerID="9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251" Nov 6 23:39:30.071316 containerd[1471]: time="2025-11-06T23:39:30.071277107Z" level=info msg="RemoveContainer for \"9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251\"" Nov 6 23:39:30.076365 containerd[1471]: time="2025-11-06T23:39:30.076192314Z" level=info msg="RemoveContainer for \"9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251\" returns successfully" Nov 6 23:39:30.077479 kubelet[2705]: I1106 23:39:30.077424 2705 scope.go:117] "RemoveContainer" containerID="5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0" Nov 6 23:39:30.077987 containerd[1471]: time="2025-11-06T23:39:30.077813458Z" level=error msg="ContainerStatus for \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\": not found" Nov 6 23:39:30.078667 kubelet[2705]: E1106 23:39:30.078618 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\": not found" containerID="5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0" Nov 6 23:39:30.078798 kubelet[2705]: I1106 23:39:30.078672 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0"} err="failed to get container status \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b2f7ffaedbdd5a79c29dc2f9bf4900a0b566dd1d4c443056e80d3b559eca5f0\": not found" Nov 6 23:39:30.078798 kubelet[2705]: I1106 23:39:30.078729 2705 scope.go:117] "RemoveContainer" containerID="d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446" Nov 6 23:39:30.079126 containerd[1471]: time="2025-11-06T23:39:30.078977705Z" 
level=error msg="ContainerStatus for \"d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446\": not found" Nov 6 23:39:30.079706 kubelet[2705]: E1106 23:39:30.079450 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446\": not found" containerID="d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446" Nov 6 23:39:30.079706 kubelet[2705]: I1106 23:39:30.079528 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446"} err="failed to get container status \"d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5db1eb36aea2a6e20e840a6e8a68ae62030d38d2670232177a111365c7d4446\": not found" Nov 6 23:39:30.079706 kubelet[2705]: I1106 23:39:30.079557 2705 scope.go:117] "RemoveContainer" containerID="b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e" Nov 6 23:39:30.079915 containerd[1471]: time="2025-11-06T23:39:30.079826411Z" level=error msg="ContainerStatus for \"b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e\": not found" Nov 6 23:39:30.080022 kubelet[2705]: E1106 23:39:30.079987 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e\": not found" containerID="b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e" Nov 6 23:39:30.080087 kubelet[2705]: I1106 23:39:30.080026 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e"} err="failed to get container status \"b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5a98a89271631d6ffbc681a28f0b8a0242344b9dc2b9e3f82de20540dc0da9e\": not found" Nov 6 23:39:30.080087 kubelet[2705]: I1106 23:39:30.080055 2705 scope.go:117] "RemoveContainer" containerID="eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71" Nov 6 23:39:30.080317 containerd[1471]: time="2025-11-06T23:39:30.080254965Z" level=error msg="ContainerStatus for \"eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71\": not found" Nov 6 23:39:30.080554 kubelet[2705]: E1106 23:39:30.080527 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71\": not found" containerID="eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71" Nov 6 23:39:30.080680 kubelet[2705]: I1106 23:39:30.080560 2705 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71"} err="failed to get container status \"eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71\": rpc error: code = NotFound desc = an error occurred when try to find container \"eae728bb4ccfa748224376ee3a559d48d295f572f477faae58acfb0d5367aa71\": not found" Nov 6 23:39:30.080680 kubelet[2705]: I1106 23:39:30.080583 2705 scope.go:117] "RemoveContainer" containerID="9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251" Nov 6 23:39:30.080840 containerd[1471]: time="2025-11-06T23:39:30.080799419Z" level=error msg="ContainerStatus for \"9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251\": not found" Nov 6 23:39:30.080985 kubelet[2705]: E1106 23:39:30.080962 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251\": not found" containerID="9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251" Nov 6 23:39:30.081055 kubelet[2705]: I1106 23:39:30.080994 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251"} err="failed to get container status \"9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bf4941125d2de948b5581e76dde09a6c0366118c0e88ba49a53707755690251\": not found" Nov 6 23:39:30.081055 kubelet[2705]: I1106 23:39:30.081019 2705 scope.go:117] "RemoveContainer" containerID="8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201" Nov 6 23:39:30.082392 containerd[1471]: time="2025-11-06T23:39:30.082275032Z" level=info msg="RemoveContainer for \"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\"" Nov 6 23:39:30.087339 containerd[1471]: time="2025-11-06T23:39:30.087282048Z" level=info msg="RemoveContainer for \"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\" returns successfully" Nov 6 23:39:30.087558 kubelet[2705]: I1106 23:39:30.087518 2705 scope.go:117] "RemoveContainer" containerID="8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201" Nov 6 23:39:30.087806 containerd[1471]: time="2025-11-06T23:39:30.087756791Z" level=error msg="ContainerStatus for \"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\": not found" Nov 6 23:39:30.088032 kubelet[2705]: E1106 23:39:30.087985 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\": not found" containerID="8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201" Nov 6 23:39:30.088125 kubelet[2705]: I1106 23:39:30.088020 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201"} err="failed to get container status 
\"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b66c12d867495fd8d89a60fc8e40b207b1918f8962cf0e61b497a7f193ab201\": not found" Nov 6 23:39:30.119144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176-rootfs.mount: Deactivated successfully. Nov 6 23:39:30.119320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece-rootfs.mount: Deactivated successfully. Nov 6 23:39:30.119422 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176-shm.mount: Deactivated successfully. Nov 6 23:39:30.119579 systemd[1]: var-lib-kubelet-pods-8a1f9b4f\x2d6ab5\x2d4a17\x2d9c76\x2d891197dc40b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d66tw2.mount: Deactivated successfully. Nov 6 23:39:30.119698 systemd[1]: var-lib-kubelet-pods-83df6523\x2dec5e\x2d46af\x2d8792\x2db01a49937de4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djw6g8.mount: Deactivated successfully. Nov 6 23:39:30.119820 systemd[1]: var-lib-kubelet-pods-83df6523\x2dec5e\x2d46af\x2d8792\x2db01a49937de4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 6 23:39:30.119926 systemd[1]: var-lib-kubelet-pods-83df6523\x2dec5e\x2d46af\x2d8792\x2db01a49937de4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 6 23:39:30.605329 kubelet[2705]: I1106 23:39:30.605279 2705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83df6523-ec5e-46af-8792-b01a49937de4" path="/var/lib/kubelet/pods/83df6523-ec5e-46af-8792-b01a49937de4/volumes" Nov 6 23:39:30.607022 kubelet[2705]: I1106 23:39:30.606979 2705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a1f9b4f-6ab5-4a17-9c76-891197dc40b3" path="/var/lib/kubelet/pods/8a1f9b4f-6ab5-4a17-9c76-891197dc40b3/volumes" Nov 6 23:39:31.079165 sshd[4322]: Connection closed by 139.178.89.65 port 40268 Nov 6 23:39:31.080370 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Nov 6 23:39:31.086546 systemd[1]: sshd@25-10.128.0.22:22-139.178.89.65:40268.service: Deactivated successfully. Nov 6 23:39:31.090641 systemd[1]: session-26.scope: Deactivated successfully. Nov 6 23:39:31.091156 systemd[1]: session-26.scope: Consumed 2.016s CPU time, 23.8M memory peak. Nov 6 23:39:31.092307 systemd-logind[1459]: Session 26 logged out. Waiting for processes to exit. Nov 6 23:39:31.094565 systemd-logind[1459]: Removed session 26. Nov 6 23:39:31.140995 systemd[1]: Started sshd@26-10.128.0.22:22-139.178.89.65:34292.service - OpenSSH per-connection server daemon (139.178.89.65:34292). Nov 6 23:39:31.446978 sshd[4482]: Accepted publickey for core from 139.178.89.65 port 34292 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js Nov 6 23:39:31.448835 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:39:31.456492 systemd-logind[1459]: New session 27 of user core. Nov 6 23:39:31.461791 systemd[1]: Started session-27.scope - Session 27 of User core. 
Nov 6 23:39:31.763367 kubelet[2705]: E1106 23:39:31.763097 2705 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 6 23:39:32.086433 ntpd[1441]: Deleting interface #11 lxc_health, fe80::1c82:4eff:fef1:5e0e%8#123, interface stats: received=0, sent=0, dropped=0, active_time=84 secs
Nov 6 23:39:32.087338 ntpd[1441]: 6 Nov 23:39:32 ntpd[1441]: Deleting interface #11 lxc_health, fe80::1c82:4eff:fef1:5e0e%8#123, interface stats: received=0, sent=0, dropped=0, active_time=84 secs
Nov 6 23:39:32.237037 sshd[4484]: Connection closed by 139.178.89.65 port 34292
Nov 6 23:39:32.238786 sshd-session[4482]: pam_unix(sshd:session): session closed for user core
Nov 6 23:39:32.254246 systemd[1]: sshd@26-10.128.0.22:22-139.178.89.65:34292.service: Deactivated successfully.
Nov 6 23:39:32.262633 systemd[1]: session-27.scope: Deactivated successfully.
Nov 6 23:39:32.267316 systemd-logind[1459]: Session 27 logged out. Waiting for processes to exit.
Nov 6 23:39:32.271452 systemd[1]: Created slice kubepods-burstable-pod75f10eaf_7cd4_43a2_811f_0d3e035905a5.slice - libcontainer container kubepods-burstable-pod75f10eaf_7cd4_43a2_811f_0d3e035905a5.slice.
Nov 6 23:39:32.277372 systemd-logind[1459]: Removed session 27.
Nov 6 23:39:32.309999 systemd[1]: Started sshd@27-10.128.0.22:22-139.178.89.65:34298.service - OpenSSH per-connection server daemon (139.178.89.65:34298).
Nov 6 23:39:32.373788 kubelet[2705]: I1106 23:39:32.373588 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75f10eaf-7cd4-43a2-811f-0d3e035905a5-cni-path\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.373788 kubelet[2705]: I1106 23:39:32.373660 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75f10eaf-7cd4-43a2-811f-0d3e035905a5-hubble-tls\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.373788 kubelet[2705]: I1106 23:39:32.373696 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75f10eaf-7cd4-43a2-811f-0d3e035905a5-host-proc-sys-net\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.374410 kubelet[2705]: I1106 23:39:32.373758 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75f10eaf-7cd4-43a2-811f-0d3e035905a5-bpf-maps\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.374410 kubelet[2705]: I1106 23:39:32.374181 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/75f10eaf-7cd4-43a2-811f-0d3e035905a5-cilium-ipsec-secrets\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.374410 kubelet[2705]: I1106 23:39:32.374219 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75f10eaf-7cd4-43a2-811f-0d3e035905a5-host-proc-sys-kernel\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.374410 kubelet[2705]: I1106 23:39:32.374268 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj76x\" (UniqueName: \"kubernetes.io/projected/75f10eaf-7cd4-43a2-811f-0d3e035905a5-kube-api-access-bj76x\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.374410 kubelet[2705]: I1106 23:39:32.374307 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75f10eaf-7cd4-43a2-811f-0d3e035905a5-cilium-run\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.375197 kubelet[2705]: I1106 23:39:32.374770 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75f10eaf-7cd4-43a2-811f-0d3e035905a5-cilium-cgroup\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.375197 kubelet[2705]: I1106 23:39:32.374871 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75f10eaf-7cd4-43a2-811f-0d3e035905a5-etc-cni-netd\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.375197 kubelet[2705]: I1106 23:39:32.374910 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75f10eaf-7cd4-43a2-811f-0d3e035905a5-cilium-config-path\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.375197 kubelet[2705]: I1106 23:39:32.374986 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75f10eaf-7cd4-43a2-811f-0d3e035905a5-clustermesh-secrets\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.375197 kubelet[2705]: I1106 23:39:32.375023 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75f10eaf-7cd4-43a2-811f-0d3e035905a5-hostproc\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.375197 kubelet[2705]: I1106 23:39:32.375127 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75f10eaf-7cd4-43a2-811f-0d3e035905a5-lib-modules\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.375597 kubelet[2705]: I1106 23:39:32.375191 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75f10eaf-7cd4-43a2-811f-0d3e035905a5-xtables-lock\") pod \"cilium-px727\" (UID: \"75f10eaf-7cd4-43a2-811f-0d3e035905a5\") " pod="kube-system/cilium-px727"
Nov 6 23:39:32.581154 containerd[1471]: time="2025-11-06T23:39:32.580823608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-px727,Uid:75f10eaf-7cd4-43a2-811f-0d3e035905a5,Namespace:kube-system,Attempt:0,}"
Nov 6 23:39:32.620457 containerd[1471]: time="2025-11-06T23:39:32.620058911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 6 23:39:32.620457 containerd[1471]: time="2025-11-06T23:39:32.620167845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 6 23:39:32.620457 containerd[1471]: time="2025-11-06T23:39:32.620195983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:39:32.620457 containerd[1471]: time="2025-11-06T23:39:32.620372712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:39:32.645603 sshd[4494]: Accepted publickey for core from 139.178.89.65 port 34298 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js
Nov 6 23:39:32.647668 sshd-session[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 23:39:32.649740 systemd[1]: Started cri-containerd-06d1425503670017bd4806500995b1db1886965ad3c92c77461c852eb1181cc4.scope - libcontainer container 06d1425503670017bd4806500995b1db1886965ad3c92c77461c852eb1181cc4.
Nov 6 23:39:32.659883 systemd-logind[1459]: New session 28 of user core.
Nov 6 23:39:32.667933 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 6 23:39:32.698868 containerd[1471]: time="2025-11-06T23:39:32.698784746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-px727,Uid:75f10eaf-7cd4-43a2-811f-0d3e035905a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"06d1425503670017bd4806500995b1db1886965ad3c92c77461c852eb1181cc4\""
Nov 6 23:39:32.709060 containerd[1471]: time="2025-11-06T23:39:32.708887928Z" level=info msg="CreateContainer within sandbox \"06d1425503670017bd4806500995b1db1886965ad3c92c77461c852eb1181cc4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 6 23:39:32.726266 containerd[1471]: time="2025-11-06T23:39:32.726193272Z" level=info msg="CreateContainer within sandbox \"06d1425503670017bd4806500995b1db1886965ad3c92c77461c852eb1181cc4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb8860149b1f8a9aa233fdad95e22b57a600c47265fb00f9193a653888fca3d2\""
Nov 6 23:39:32.729334 containerd[1471]: time="2025-11-06T23:39:32.728769997Z" level=info msg="StartContainer for \"eb8860149b1f8a9aa233fdad95e22b57a600c47265fb00f9193a653888fca3d2\""
Nov 6 23:39:32.768756 systemd[1]: Started cri-containerd-eb8860149b1f8a9aa233fdad95e22b57a600c47265fb00f9193a653888fca3d2.scope - libcontainer container eb8860149b1f8a9aa233fdad95e22b57a600c47265fb00f9193a653888fca3d2.
Nov 6 23:39:32.808988 containerd[1471]: time="2025-11-06T23:39:32.808930786Z" level=info msg="StartContainer for \"eb8860149b1f8a9aa233fdad95e22b57a600c47265fb00f9193a653888fca3d2\" returns successfully"
Nov 6 23:39:32.827129 systemd[1]: cri-containerd-eb8860149b1f8a9aa233fdad95e22b57a600c47265fb00f9193a653888fca3d2.scope: Deactivated successfully.
Nov 6 23:39:32.861042 sshd[4534]: Connection closed by 139.178.89.65 port 34298
Nov 6 23:39:32.864541 sshd-session[4494]: pam_unix(sshd:session): session closed for user core
Nov 6 23:39:32.869933 systemd[1]: sshd@27-10.128.0.22:22-139.178.89.65:34298.service: Deactivated successfully.
Nov 6 23:39:32.873304 systemd[1]: session-28.scope: Deactivated successfully.
Nov 6 23:39:32.877206 systemd-logind[1459]: Session 28 logged out. Waiting for processes to exit.
Nov 6 23:39:32.880070 systemd-logind[1459]: Removed session 28.
Nov 6 23:39:32.882098 containerd[1471]: time="2025-11-06T23:39:32.882027596Z" level=info msg="shim disconnected" id=eb8860149b1f8a9aa233fdad95e22b57a600c47265fb00f9193a653888fca3d2 namespace=k8s.io
Nov 6 23:39:32.882416 containerd[1471]: time="2025-11-06T23:39:32.882388492Z" level=warning msg="cleaning up after shim disconnected" id=eb8860149b1f8a9aa233fdad95e22b57a600c47265fb00f9193a653888fca3d2 namespace=k8s.io
Nov 6 23:39:32.882704 containerd[1471]: time="2025-11-06T23:39:32.882569911Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:39:32.924028 systemd[1]: Started sshd@28-10.128.0.22:22-139.178.89.65:34304.service - OpenSSH per-connection server daemon (139.178.89.65:34304).
Nov 6 23:39:33.041798 containerd[1471]: time="2025-11-06T23:39:33.041737373Z" level=info msg="CreateContainer within sandbox \"06d1425503670017bd4806500995b1db1886965ad3c92c77461c852eb1181cc4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 6 23:39:33.068793 containerd[1471]: time="2025-11-06T23:39:33.068727580Z" level=info msg="CreateContainer within sandbox \"06d1425503670017bd4806500995b1db1886965ad3c92c77461c852eb1181cc4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"743b6be25768ce63004226c4f31b219dfb4dd99e333d9c6c6a5c186a00523aab\""
Nov 6 23:39:33.070953 containerd[1471]: time="2025-11-06T23:39:33.070887741Z" level=info msg="StartContainer for \"743b6be25768ce63004226c4f31b219dfb4dd99e333d9c6c6a5c186a00523aab\""
Nov 6 23:39:33.112850 systemd[1]: Started cri-containerd-743b6be25768ce63004226c4f31b219dfb4dd99e333d9c6c6a5c186a00523aab.scope - libcontainer container 743b6be25768ce63004226c4f31b219dfb4dd99e333d9c6c6a5c186a00523aab.
Nov 6 23:39:33.165772 containerd[1471]: time="2025-11-06T23:39:33.163604454Z" level=info msg="StartContainer for \"743b6be25768ce63004226c4f31b219dfb4dd99e333d9c6c6a5c186a00523aab\" returns successfully"
Nov 6 23:39:33.172346 systemd[1]: cri-containerd-743b6be25768ce63004226c4f31b219dfb4dd99e333d9c6c6a5c186a00523aab.scope: Deactivated successfully.
Nov 6 23:39:33.211322 containerd[1471]: time="2025-11-06T23:39:33.210870012Z" level=info msg="shim disconnected" id=743b6be25768ce63004226c4f31b219dfb4dd99e333d9c6c6a5c186a00523aab namespace=k8s.io
Nov 6 23:39:33.211322 containerd[1471]: time="2025-11-06T23:39:33.210962696Z" level=warning msg="cleaning up after shim disconnected" id=743b6be25768ce63004226c4f31b219dfb4dd99e333d9c6c6a5c186a00523aab namespace=k8s.io
Nov 6 23:39:33.211322 containerd[1471]: time="2025-11-06T23:39:33.210979487Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:39:33.252891 sshd[4609]: Accepted publickey for core from 139.178.89.65 port 34304 ssh2: RSA SHA256:ithM/iDShBJWdJjWGHKb3evZWSs7UwybeJU/M8eH9js
Nov 6 23:39:33.253837 sshd-session[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 23:39:33.262039 systemd-logind[1459]: New session 29 of user core.
Nov 6 23:39:33.267709 systemd[1]: Started session-29.scope - Session 29 of User core.
Nov 6 23:39:34.049352 containerd[1471]: time="2025-11-06T23:39:34.049270272Z" level=info msg="CreateContainer within sandbox \"06d1425503670017bd4806500995b1db1886965ad3c92c77461c852eb1181cc4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 6 23:39:34.081200 containerd[1471]: time="2025-11-06T23:39:34.081131619Z" level=info msg="CreateContainer within sandbox \"06d1425503670017bd4806500995b1db1886965ad3c92c77461c852eb1181cc4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b6134cb2c9dab37cc7622c25124718f95646ec79d0a4ca219a682a8a0d8c65b2\""
Nov 6 23:39:34.084695 containerd[1471]: time="2025-11-06T23:39:34.082349178Z" level=info msg="StartContainer for \"b6134cb2c9dab37cc7622c25124718f95646ec79d0a4ca219a682a8a0d8c65b2\""
Nov 6 23:39:34.137696 systemd[1]: Started cri-containerd-b6134cb2c9dab37cc7622c25124718f95646ec79d0a4ca219a682a8a0d8c65b2.scope - libcontainer container b6134cb2c9dab37cc7622c25124718f95646ec79d0a4ca219a682a8a0d8c65b2.
Nov 6 23:39:34.184228 containerd[1471]: time="2025-11-06T23:39:34.184127961Z" level=info msg="StartContainer for \"b6134cb2c9dab37cc7622c25124718f95646ec79d0a4ca219a682a8a0d8c65b2\" returns successfully"
Nov 6 23:39:34.190269 systemd[1]: cri-containerd-b6134cb2c9dab37cc7622c25124718f95646ec79d0a4ca219a682a8a0d8c65b2.scope: Deactivated successfully.
Nov 6 23:39:34.241501 containerd[1471]: time="2025-11-06T23:39:34.241376389Z" level=info msg="shim disconnected" id=b6134cb2c9dab37cc7622c25124718f95646ec79d0a4ca219a682a8a0d8c65b2 namespace=k8s.io
Nov 6 23:39:34.241501 containerd[1471]: time="2025-11-06T23:39:34.241485930Z" level=warning msg="cleaning up after shim disconnected" id=b6134cb2c9dab37cc7622c25124718f95646ec79d0a4ca219a682a8a0d8c65b2 namespace=k8s.io
Nov 6 23:39:34.241501 containerd[1471]: time="2025-11-06T23:39:34.241505006Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:39:34.484133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6134cb2c9dab37cc7622c25124718f95646ec79d0a4ca219a682a8a0d8c65b2-rootfs.mount: Deactivated successfully.
Nov 6 23:39:35.052998 containerd[1471]: time="2025-11-06T23:39:35.052943819Z" level=info msg="CreateContainer within sandbox \"06d1425503670017bd4806500995b1db1886965ad3c92c77461c852eb1181cc4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 6 23:39:35.080272 containerd[1471]: time="2025-11-06T23:39:35.080186490Z" level=info msg="CreateContainer within sandbox \"06d1425503670017bd4806500995b1db1886965ad3c92c77461c852eb1181cc4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f38f396b9bd654df7240be326f102b7f330fc537f08da4166f7d69a01bf4deaa\""
Nov 6 23:39:35.081674 containerd[1471]: time="2025-11-06T23:39:35.081608089Z" level=info msg="StartContainer for \"f38f396b9bd654df7240be326f102b7f330fc537f08da4166f7d69a01bf4deaa\""
Nov 6 23:39:35.132795 systemd[1]: Started cri-containerd-f38f396b9bd654df7240be326f102b7f330fc537f08da4166f7d69a01bf4deaa.scope - libcontainer container f38f396b9bd654df7240be326f102b7f330fc537f08da4166f7d69a01bf4deaa.
Nov 6 23:39:35.177152 systemd[1]: cri-containerd-f38f396b9bd654df7240be326f102b7f330fc537f08da4166f7d69a01bf4deaa.scope: Deactivated successfully.
Nov 6 23:39:35.181855 containerd[1471]: time="2025-11-06T23:39:35.181126154Z" level=info msg="StartContainer for \"f38f396b9bd654df7240be326f102b7f330fc537f08da4166f7d69a01bf4deaa\" returns successfully"
Nov 6 23:39:35.215031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f38f396b9bd654df7240be326f102b7f330fc537f08da4166f7d69a01bf4deaa-rootfs.mount: Deactivated successfully.
Nov 6 23:39:35.217666 containerd[1471]: time="2025-11-06T23:39:35.217277186Z" level=info msg="shim disconnected" id=f38f396b9bd654df7240be326f102b7f330fc537f08da4166f7d69a01bf4deaa namespace=k8s.io
Nov 6 23:39:35.217666 containerd[1471]: time="2025-11-06T23:39:35.217350755Z" level=warning msg="cleaning up after shim disconnected" id=f38f396b9bd654df7240be326f102b7f330fc537f08da4166f7d69a01bf4deaa namespace=k8s.io
Nov 6 23:39:35.217666 containerd[1471]: time="2025-11-06T23:39:35.217366569Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:39:36.057957 containerd[1471]: time="2025-11-06T23:39:36.057721092Z" level=info msg="CreateContainer within sandbox \"06d1425503670017bd4806500995b1db1886965ad3c92c77461c852eb1181cc4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 6 23:39:36.086192 containerd[1471]: time="2025-11-06T23:39:36.085894405Z" level=info msg="CreateContainer within sandbox \"06d1425503670017bd4806500995b1db1886965ad3c92c77461c852eb1181cc4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"03cfcddb89ce33a6582a7868ca062a0a9c715201ec3cac5f2508ec7a634a24e4\""
Nov 6 23:39:36.089952 containerd[1471]: time="2025-11-06T23:39:36.089897944Z" level=info msg="StartContainer for \"03cfcddb89ce33a6582a7868ca062a0a9c715201ec3cac5f2508ec7a634a24e4\""
Nov 6 23:39:36.141439 systemd[1]: run-containerd-runc-k8s.io-03cfcddb89ce33a6582a7868ca062a0a9c715201ec3cac5f2508ec7a634a24e4-runc.kBU3z1.mount: Deactivated successfully.
Nov 6 23:39:36.154864 systemd[1]: Started cri-containerd-03cfcddb89ce33a6582a7868ca062a0a9c715201ec3cac5f2508ec7a634a24e4.scope - libcontainer container 03cfcddb89ce33a6582a7868ca062a0a9c715201ec3cac5f2508ec7a634a24e4.
Nov 6 23:39:36.209962 containerd[1471]: time="2025-11-06T23:39:36.209882830Z" level=info msg="StartContainer for \"03cfcddb89ce33a6582a7868ca062a0a9c715201ec3cac5f2508ec7a634a24e4\" returns successfully"
Nov 6 23:39:36.551414 containerd[1471]: time="2025-11-06T23:39:36.550897235Z" level=info msg="StopPodSandbox for \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\""
Nov 6 23:39:36.551414 containerd[1471]: time="2025-11-06T23:39:36.551065849Z" level=info msg="TearDown network for sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" successfully"
Nov 6 23:39:36.551414 containerd[1471]: time="2025-11-06T23:39:36.551089237Z" level=info msg="StopPodSandbox for \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" returns successfully"
Nov 6 23:39:36.552414 containerd[1471]: time="2025-11-06T23:39:36.552204118Z" level=info msg="RemovePodSandbox for \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\""
Nov 6 23:39:36.552414 containerd[1471]: time="2025-11-06T23:39:36.552248942Z" level=info msg="Forcibly stopping sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\""
Nov 6 23:39:36.553989 containerd[1471]: time="2025-11-06T23:39:36.553695298Z" level=info msg="TearDown network for sandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" successfully"
Nov 6 23:39:36.571277 containerd[1471]: time="2025-11-06T23:39:36.570951904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 6 23:39:36.571277 containerd[1471]: time="2025-11-06T23:39:36.571057056Z" level=info msg="RemovePodSandbox \"df43cff728c05f901713b1f9a5aecba8b92a0c52dc13c280e853a343db4d8ece\" returns successfully"
Nov 6 23:39:36.572410 containerd[1471]: time="2025-11-06T23:39:36.572106008Z" level=info msg="StopPodSandbox for \"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\""
Nov 6 23:39:36.572410 containerd[1471]: time="2025-11-06T23:39:36.572239591Z" level=info msg="TearDown network for sandbox \"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\" successfully"
Nov 6 23:39:36.572410 containerd[1471]: time="2025-11-06T23:39:36.572315293Z" level=info msg="StopPodSandbox for \"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\" returns successfully"
Nov 6 23:39:36.573265 containerd[1471]: time="2025-11-06T23:39:36.573039986Z" level=info msg="RemovePodSandbox for \"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\""
Nov 6 23:39:36.573265 containerd[1471]: time="2025-11-06T23:39:36.573076596Z" level=info msg="Forcibly stopping sandbox \"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\""
Nov 6 23:39:36.573265 containerd[1471]: time="2025-11-06T23:39:36.573166251Z" level=info msg="TearDown network for sandbox \"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\" successfully"
Nov 6 23:39:36.580014 containerd[1471]: time="2025-11-06T23:39:36.579659028Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 6 23:39:36.580014 containerd[1471]: time="2025-11-06T23:39:36.579751914Z" level=info msg="RemovePodSandbox \"26fdfa9dda96b7b3d7b6d07242b5e35c4fa7e717a3a59a6e7253f04c3beea176\" returns successfully"
Nov 6 23:39:36.969563 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 6 23:39:37.093086 kubelet[2705]: I1106 23:39:37.091492 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-px727" podStartSLOduration=5.091449241 podStartE2EDuration="5.091449241s" podCreationTimestamp="2025-11-06 23:39:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:39:37.091154994 +0000 UTC m=+120.753302857" watchObservedRunningTime="2025-11-06 23:39:37.091449241 +0000 UTC m=+120.753597103"
Nov 6 23:39:40.486732 systemd-networkd[1382]: lxc_health: Link UP
Nov 6 23:39:40.488780 systemd-networkd[1382]: lxc_health: Gained carrier
Nov 6 23:39:41.938717 systemd-networkd[1382]: lxc_health: Gained IPv6LL
Nov 6 23:39:44.086538 ntpd[1441]: Listen normally on 14 lxc_health [fe80::8493:43ff:fef4:d965%14]:123
Nov 6 23:39:44.087255 ntpd[1441]: 6 Nov 23:39:44 ntpd[1441]: Listen normally on 14 lxc_health [fe80::8493:43ff:fef4:d965%14]:123
Nov 6 23:39:44.391942 systemd[1]: run-containerd-runc-k8s.io-03cfcddb89ce33a6582a7868ca062a0a9c715201ec3cac5f2508ec7a634a24e4-runc.SxUroU.mount: Deactivated successfully.
Nov 6 23:39:46.765176 sshd[4668]: Connection closed by 139.178.89.65 port 34304
Nov 6 23:39:46.767774 sshd-session[4609]: pam_unix(sshd:session): session closed for user core
Nov 6 23:39:46.777650 systemd-logind[1459]: Session 29 logged out. Waiting for processes to exit.
Nov 6 23:39:46.778483 systemd[1]: sshd@28-10.128.0.22:22-139.178.89.65:34304.service: Deactivated successfully.
Nov 6 23:39:46.785191 systemd[1]: session-29.scope: Deactivated successfully.
Nov 6 23:39:46.790623 systemd-logind[1459]: Removed session 29.