Dec 13 01:16:03.158500 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:16:03.158545 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:16:03.158564 kernel: BIOS-provided physical RAM map: Dec 13 01:16:03.158578 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 13 01:16:03.158592 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 13 01:16:03.158606 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 13 01:16:03.158623 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 13 01:16:03.158642 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 13 01:16:03.158656 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Dec 13 01:16:03.158671 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Dec 13 01:16:03.158686 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Dec 13 01:16:03.158701 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Dec 13 01:16:03.158715 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 13 01:16:03.158730 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 13 01:16:03.158753 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 13 01:16:03.158770 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 13 01:16:03.158786 kernel: BIOS-e820: [mem 
0x0000000100000000-0x000000021fffffff] usable Dec 13 01:16:03.158802 kernel: NX (Execute Disable) protection: active Dec 13 01:16:03.158818 kernel: APIC: Static calls initialized Dec 13 01:16:03.158834 kernel: efi: EFI v2.7 by EDK II Dec 13 01:16:03.158851 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Dec 13 01:16:03.159205 kernel: SMBIOS 2.4 present. Dec 13 01:16:03.159223 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Dec 13 01:16:03.159238 kernel: Hypervisor detected: KVM Dec 13 01:16:03.159261 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:16:03.159277 kernel: kvm-clock: using sched offset of 12740617750 cycles Dec 13 01:16:03.159294 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:16:03.159311 kernel: tsc: Detected 2299.998 MHz processor Dec 13 01:16:03.159328 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:16:03.159345 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:16:03.159361 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 13 01:16:03.159378 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Dec 13 01:16:03.159395 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:16:03.159416 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 13 01:16:03.159432 kernel: Using GB pages for direct mapping Dec 13 01:16:03.159449 kernel: Secure boot disabled Dec 13 01:16:03.159465 kernel: ACPI: Early table checksum verification disabled Dec 13 01:16:03.159481 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 13 01:16:03.159498 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 13 01:16:03.159515 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 13 01:16:03.159539 kernel: ACPI: DSDT 
0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 13 01:16:03.159559 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 13 01:16:03.159577 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Dec 13 01:16:03.159595 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 13 01:16:03.159613 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 13 01:16:03.159630 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 13 01:16:03.159648 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 13 01:16:03.159669 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 13 01:16:03.159687 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 13 01:16:03.159705 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 13 01:16:03.159723 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 13 01:16:03.159740 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 13 01:16:03.159758 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 13 01:16:03.159776 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 13 01:16:03.159793 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 13 01:16:03.159810 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 13 01:16:03.159832 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 13 01:16:03.159849 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 01:16:03.159881 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 01:16:03.159899 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 01:16:03.159917 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 13 01:16:03.159934 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 13 01:16:03.159952 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 01:16:03.159970 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 01:16:03.159995 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Dec 13 01:16:03.160017 kernel: Zone ranges: Dec 13 01:16:03.160035 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:16:03.160052 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 01:16:03.160070 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:16:03.160088 kernel: Movable zone start for each node Dec 13 01:16:03.160106 kernel: Early memory node ranges Dec 13 01:16:03.160123 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 01:16:03.160141 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 01:16:03.160159 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Dec 13 01:16:03.160176 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 01:16:03.160197 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:16:03.160215 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 01:16:03.160232 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:16:03.160250 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 01:16:03.160267 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 01:16:03.160285 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 01:16:03.160302 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 01:16:03.160320 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 01:16:03.160337 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:16:03.160359 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 
01:16:03.160376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:16:03.160394 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:16:03.160412 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:16:03.160429 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:16:03.160447 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:16:03.160465 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:16:03.160482 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 01:16:03.160500 kernel: Booting paravirtualized kernel on KVM Dec 13 01:16:03.160521 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:16:03.160539 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:16:03.160557 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:16:03.160575 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:16:03.160592 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:16:03.160609 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:16:03.160627 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:16:03.160646 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:16:03.160668 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Dec 13 01:16:03.160685 kernel: random: crng init done Dec 13 01:16:03.160702 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 01:16:03.160720 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:16:03.160738 kernel: Fallback order for Node 0: 0 Dec 13 01:16:03.160756 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Dec 13 01:16:03.160773 kernel: Policy zone: Normal Dec 13 01:16:03.160791 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:16:03.160808 kernel: software IO TLB: area num 2. Dec 13 01:16:03.160830 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 346940K reserved, 0K cma-reserved) Dec 13 01:16:03.160848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:16:03.160878 kernel: Kernel/User page tables isolation: enabled Dec 13 01:16:03.160896 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:16:03.160913 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:16:03.160931 kernel: Dynamic Preempt: voluntary Dec 13 01:16:03.160948 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:16:03.160968 kernel: rcu: RCU event tracing is enabled. Dec 13 01:16:03.161009 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:16:03.161028 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:16:03.161047 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:16:03.161068 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:16:03.161088 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:16:03.161107 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:16:03.161125 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 01:16:03.161144 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 13 01:16:03.161162 kernel: Console: colour dummy device 80x25 Dec 13 01:16:03.161207 kernel: printk: console [ttyS0] enabled Dec 13 01:16:03.161240 kernel: ACPI: Core revision 20230628 Dec 13 01:16:03.161277 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:16:03.161293 kernel: x2apic enabled Dec 13 01:16:03.161311 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:16:03.161329 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 01:16:03.161348 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:16:03.161365 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Dec 13 01:16:03.161386 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 01:16:03.161405 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 01:16:03.161424 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:16:03.161441 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 01:16:03.161456 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 01:16:03.161472 kernel: Spectre V2 : Mitigation: IBRS Dec 13 01:16:03.161489 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:16:03.161507 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:16:03.161523 kernel: RETBleed: Mitigation: IBRS Dec 13 01:16:03.161551 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:16:03.161574 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Dec 13 01:16:03.161590 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:16:03.161606 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 01:16:03.161622 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 
13 01:16:03.161640 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:16:03.161657 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:16:03.161673 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:16:03.161690 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:16:03.161713 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 01:16:03.161732 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:16:03.161750 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:16:03.161766 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:16:03.161783 kernel: landlock: Up and running. Dec 13 01:16:03.161803 kernel: SELinux: Initializing. Dec 13 01:16:03.161823 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:16:03.161843 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:16:03.161892 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 01:16:03.161918 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:16:03.161937 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:16:03.161957 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:16:03.162060 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 01:16:03.162093 kernel: signal: max sigframe size: 1776 Dec 13 01:16:03.162114 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:16:03.162141 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:16:03.162164 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:16:03.162184 kernel: smp: Bringing up secondary CPUs ... 
Dec 13 01:16:03.162222 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:16:03.162240 kernel: .... node #0, CPUs: #1 Dec 13 01:16:03.162261 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 01:16:03.162282 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 01:16:03.162301 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:16:03.162320 kernel: smpboot: Max logical packages: 1 Dec 13 01:16:03.162340 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 01:16:03.162357 kernel: devtmpfs: initialized Dec 13 01:16:03.162378 kernel: x86/mm: Memory block size: 128MB Dec 13 01:16:03.162396 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 01:16:03.162415 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:16:03.162434 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:16:03.162453 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:16:03.162471 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:16:03.162498 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:16:03.162517 kernel: audit: type=2000 audit(1734052561.136:1): state=initialized audit_enabled=0 res=1 Dec 13 01:16:03.162535 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:16:03.162558 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:16:03.162576 kernel: cpuidle: using governor menu Dec 13 01:16:03.162594 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:16:03.162612 kernel: dca service started, version 1.12.1 Dec 13 01:16:03.162630 kernel: PCI: Using configuration type 1 for base access Dec 13 
01:16:03.162649 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 01:16:03.162667 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:16:03.162686 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:16:03.162704 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:16:03.162726 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:16:03.162744 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:16:03.162762 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:16:03.162781 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:16:03.162800 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:16:03.162818 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 01:16:03.162836 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:16:03.162913 kernel: ACPI: Interpreter enabled Dec 13 01:16:03.164769 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:16:03.164816 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:16:03.164837 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:16:03.164894 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 13 01:16:03.164925 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 01:16:03.164942 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:16:03.165203 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:16:03.165411 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 01:16:03.165583 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 01:16:03.165612 kernel: PCI host bridge to bus 0000:00 Dec 13 01:16:03.165785 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] 
Dec 13 01:16:03.165975 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:16:03.166132 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:16:03.166314 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 01:16:03.166493 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:16:03.166694 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 01:16:03.168286 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 01:16:03.168535 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 01:16:03.168725 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 01:16:03.169271 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 01:16:03.169469 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 01:16:03.169954 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 01:16:03.170317 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:16:03.170522 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 01:16:03.170718 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 01:16:03.170951 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:16:03.171143 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 01:16:03.171337 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 01:16:03.171368 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:16:03.171389 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:16:03.171409 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:16:03.171428 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:16:03.171447 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 01:16:03.171467 kernel: iommu: Default domain type: Translated Dec 13 01:16:03.171487 kernel: iommu: DMA 
domain TLB invalidation policy: lazy mode Dec 13 01:16:03.171506 kernel: efivars: Registered efivars operations Dec 13 01:16:03.171526 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:16:03.171549 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:16:03.171566 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 01:16:03.171586 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 01:16:03.171604 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 01:16:03.171624 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 01:16:03.171642 kernel: vgaarb: loaded Dec 13 01:16:03.171662 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:16:03.171682 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:16:03.171702 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:16:03.171725 kernel: pnp: PnP ACPI init Dec 13 01:16:03.171745 kernel: pnp: PnP ACPI: found 7 devices Dec 13 01:16:03.171764 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:16:03.171783 kernel: NET: Registered PF_INET protocol family Dec 13 01:16:03.171803 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:16:03.171823 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 01:16:03.171843 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:16:03.171877 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:16:03.171895 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 01:16:03.171918 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 01:16:03.171936 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:16:03.171955 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:16:03.171973 kernel: 
NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:16:03.171992 kernel: NET: Registered PF_XDP protocol family Dec 13 01:16:03.172185 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:16:03.172375 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:16:03.172549 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:16:03.172729 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 01:16:03.174721 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:16:03.174762 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:16:03.174783 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 01:16:03.174804 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 01:16:03.174824 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:16:03.174844 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:16:03.174904 kernel: clocksource: Switched to clocksource tsc Dec 13 01:16:03.174929 kernel: Initialise system trusted keyrings Dec 13 01:16:03.174947 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 01:16:03.174964 kernel: Key type asymmetric registered Dec 13 01:16:03.174981 kernel: Asymmetric key parser 'x509' registered Dec 13 01:16:03.174998 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:16:03.175016 kernel: io scheduler mq-deadline registered Dec 13 01:16:03.175034 kernel: io scheduler kyber registered Dec 13 01:16:03.175051 kernel: io scheduler bfq registered Dec 13 01:16:03.175069 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:16:03.175093 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 01:16:03.175320 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 01:16:03.175349 kernel: ACPI: \_SB_.LNKD: 
Enabled at IRQ 10 Dec 13 01:16:03.175554 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 01:16:03.175580 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 01:16:03.175769 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 01:16:03.175792 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:16:03.175811 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:16:03.175829 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 01:16:03.175943 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 01:16:03.175965 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 01:16:03.176184 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 01:16:03.176211 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:16:03.176238 kernel: i8042: Warning: Keylock active Dec 13 01:16:03.176258 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:16:03.176278 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:16:03.176463 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 01:16:03.176642 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 01:16:03.176809 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:16:02 UTC (1734052562) Dec 13 01:16:03.177261 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 01:16:03.177290 kernel: intel_pstate: CPU model not supported Dec 13 01:16:03.177308 kernel: pstore: Using crash dump compression: deflate Dec 13 01:16:03.177326 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 01:16:03.177479 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:16:03.177496 kernel: Segment Routing with IPv6 Dec 13 01:16:03.177521 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:16:03.177538 kernel: NET: Registered PF_PACKET protocol family Dec 13 
01:16:03.177557 kernel: Key type dns_resolver registered Dec 13 01:16:03.177574 kernel: IPI shorthand broadcast: enabled Dec 13 01:16:03.177717 kernel: sched_clock: Marking stable (957005196, 203079601)->(1330112298, -170027501) Dec 13 01:16:03.177735 kernel: registered taskstats version 1 Dec 13 01:16:03.177754 kernel: Loading compiled-in X.509 certificates Dec 13 01:16:03.177771 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:16:03.177789 kernel: Key type .fscrypt registered Dec 13 01:16:03.177811 kernel: Key type fscrypt-provisioning registered Dec 13 01:16:03.177830 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:16:03.178094 kernel: ima: No architecture policies found Dec 13 01:16:03.178117 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:16:03.178228 kernel: clk: Disabling unused clocks Dec 13 01:16:03.178247 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:16:03.178265 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:16:03.178285 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:16:03.178310 kernel: Run /init as init process Dec 13 01:16:03.178328 kernel: with arguments: Dec 13 01:16:03.178346 kernel: /init Dec 13 01:16:03.178362 kernel: with environment: Dec 13 01:16:03.178377 kernel: HOME=/ Dec 13 01:16:03.178395 kernel: TERM=linux Dec 13 01:16:03.178412 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:16:03.178435 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:16:03.178462 systemd[1]: Detected virtualization google. 
Dec 13 01:16:03.178481 systemd[1]: Detected architecture x86-64. Dec 13 01:16:03.178500 systemd[1]: Running in initrd. Dec 13 01:16:03.178517 systemd[1]: No hostname configured, using default hostname. Dec 13 01:16:03.178537 systemd[1]: Hostname set to . Dec 13 01:16:03.178555 systemd[1]: Initializing machine ID from random generator. Dec 13 01:16:03.178573 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:16:03.178593 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:16:03.178616 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:16:03.178636 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:16:03.178656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:16:03.178675 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:16:03.178694 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:16:03.178714 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:16:03.178733 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:16:03.178756 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:16:03.178775 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:16:03.178815 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:16:03.178839 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:16:03.178890 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:16:03.178908 systemd[1]: Reached target timers.target - Timer Units. 
Dec 13 01:16:03.178929 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:16:03.178948 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:16:03.178968 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:16:03.178987 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:16:03.179007 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:16:03.179028 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:16:03.179050 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:16:03.179072 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:16:03.179093 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:16:03.179119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:16:03.179140 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:16:03.179160 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:16:03.179180 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:16:03.179201 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:16:03.179229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:16:03.179250 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:16:03.179309 systemd-journald[183]: Collecting audit messages is disabled. Dec 13 01:16:03.179371 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:16:03.179393 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:16:03.179421 systemd-journald[183]: Journal started Dec 13 01:16:03.179462 systemd-journald[183]: Runtime Journal (/run/log/journal/6fe744ee66a443c3bcaa6d544ce69e40) is 8.0M, max 148.7M, 140.7M free. 
Dec 13 01:16:03.161594 systemd-modules-load[184]: Inserted module 'overlay'
Dec 13 01:16:03.190990 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:16:03.199237 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:16:03.210156 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:16:03.219528 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:16:03.226003 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:16:03.226047 kernel: Bridge firewalling registered
Dec 13 01:16:03.225167 systemd-modules-load[184]: Inserted module 'br_netfilter'
Dec 13 01:16:03.230180 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:16:03.235412 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:16:03.240551 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:03.253413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:16:03.266446 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:16:03.273084 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:03.290210 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:16:03.303248 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:16:03.314264 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:16:03.325378 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:03.339199 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:16:03.352317 systemd-resolved[211]: Positive Trust Anchors:
Dec 13 01:16:03.352332 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:16:03.352394 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:16:03.357337 systemd-resolved[211]: Defaulting to hostname 'linux'.
Dec 13 01:16:03.359187 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:16:03.384331 dracut-cmdline[219]: dracut-dracut-053
Dec 13 01:16:03.384331 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:16:03.380242 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:03.474907 kernel: SCSI subsystem initialized
Dec 13 01:16:03.485916 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:16:03.497910 kernel: iscsi: registered transport (tcp)
Dec 13 01:16:03.521897 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:16:03.521976 kernel: QLogic iSCSI HBA Driver
Dec 13 01:16:03.576260 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:16:03.587144 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:16:03.617156 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:16:03.617251 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:16:03.617281 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:16:03.665021 kernel: raid6: avx2x4 gen() 18068 MB/s
Dec 13 01:16:03.681921 kernel: raid6: avx2x2 gen() 18171 MB/s
Dec 13 01:16:03.699657 kernel: raid6: avx2x1 gen() 14302 MB/s
Dec 13 01:16:03.699741 kernel: raid6: using algorithm avx2x2 gen() 18171 MB/s
Dec 13 01:16:03.718228 kernel: raid6: .... xor() 17386 MB/s, rmw enabled
Dec 13 01:16:03.718312 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:16:03.741912 kernel: xor: automatically using best checksumming function   avx
Dec 13 01:16:03.915932 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:16:03.928949 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:16:03.935105 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:16:03.970081 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Dec 13 01:16:03.976956 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:16:03.988229 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:16:04.019205 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Dec 13 01:16:04.059156 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:16:04.069177 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:16:04.162947 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:16:04.177100 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:16:04.213210 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:16:04.237878 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:16:04.260024 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:04.297026 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:16:04.297068 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:16:04.297093 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:16:04.277483 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:16:04.331109 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:16:04.413356 kernel: scsi host0: Virtio SCSI HBA
Dec 13 01:16:04.413802 kernel: scsi 0:0:1:0: Direct-Access     Google   PersistentDisk   1    PQ: 0 ANSI: 6
Dec 13 01:16:04.391818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:16:04.392090 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:04.435961 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:04.472317 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Dec 13 01:16:04.537481 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Dec 13 01:16:04.537741 kernel: sd 0:0:1:0: [sda] Write Protect is off
Dec 13 01:16:04.538025 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Dec 13 01:16:04.538415 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 01:16:04.538656 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:16:04.538686 kernel: GPT:17805311 != 25165823
Dec 13 01:16:04.538712 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:16:04.538737 kernel: GPT:17805311 != 25165823
Dec 13 01:16:04.538793 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:16:04.538818 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:16:04.538843 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Dec 13 01:16:04.449260 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:16:04.449593 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:04.547088 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:04.628036 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (448)
Dec 13 01:16:04.628078 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (453)
Dec 13 01:16:04.563372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:16:04.587624 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:16:04.649548 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Dec 13 01:16:04.660422 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Dec 13 01:16:04.680542 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:04.703471 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Dec 13 01:16:04.722058 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Dec 13 01:16:04.752809 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Dec 13 01:16:04.764116 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:16:04.795822 disk-uuid[543]: Primary Header is updated.
Dec 13 01:16:04.795822 disk-uuid[543]: Secondary Entries is updated.
Dec 13 01:16:04.795822 disk-uuid[543]: Secondary Header is updated.
Dec 13 01:16:04.829139 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:16:04.802119 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:16:04.845907 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:16:04.875893 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:16:04.898377 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:05.867954 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:16:05.868931 disk-uuid[544]: The operation has completed successfully.
Dec 13 01:16:05.955298 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:16:05.955461 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:16:05.988147 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:16:06.019015 sh[570]: Success
Dec 13 01:16:06.044046 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:16:06.145407 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:16:06.151040 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:16:06.187813 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:16:06.228630 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:16:06.228777 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:06.228817 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:16:06.238087 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:16:06.245129 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:16:06.285964 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:16:06.295368 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:16:06.296588 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:16:06.302131 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:16:06.315126 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:16:06.377990 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:06.392388 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:06.392507 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:16:06.411584 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:16:06.411725 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:16:06.440014 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:06.429739 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:16:06.457252 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:16:06.485136 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:16:06.498186 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:16:06.543822 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:16:06.590705 systemd-networkd[752]: lo: Link UP
Dec 13 01:16:06.590719 systemd-networkd[752]: lo: Gained carrier
Dec 13 01:16:06.592645 systemd-networkd[752]: Enumeration completed
Dec 13 01:16:06.592999 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:16:06.593455 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:06.593462 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:16:06.596069 systemd-networkd[752]: eth0: Link UP
Dec 13 01:16:06.596076 systemd-networkd[752]: eth0: Gained carrier
Dec 13 01:16:06.596093 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:16:06.609164 systemd[1]: Reached target network.target - Network.
Dec 13 01:16:06.693529 ignition[741]: Ignition 2.19.0
Dec 13 01:16:06.613020 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.51/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 01:16:06.693540 ignition[741]: Stage: fetch-offline
Dec 13 01:16:06.695521 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:16:06.693595 ignition[741]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:06.712247 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:16:06.693605 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:16:06.769170 unknown[762]: fetched base config from "system"
Dec 13 01:16:06.693739 ignition[741]: parsed url from cmdline: ""
Dec 13 01:16:06.769208 unknown[762]: fetched base config from "system"
Dec 13 01:16:06.693746 ignition[741]: no config URL provided
Dec 13 01:16:06.769216 unknown[762]: fetched user config from "gcp"
Dec 13 01:16:06.693756 ignition[741]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:16:06.771852 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:16:06.693769 ignition[741]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:16:06.796393 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:16:06.693777 ignition[741]: failed to fetch config: resource requires networking
Dec 13 01:16:06.851404 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:16:06.694175 ignition[741]: Ignition finished successfully
Dec 13 01:16:06.872087 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:16:06.759337 ignition[762]: Ignition 2.19.0
Dec 13 01:16:06.911170 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:16:06.759347 ignition[762]: Stage: fetch
Dec 13 01:16:06.929251 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:16:06.759559 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:06.948083 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:16:06.759577 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:16:06.967080 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:16:06.759732 ignition[762]: parsed url from cmdline: ""
Dec 13 01:16:06.981090 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:16:06.759739 ignition[762]: no config URL provided
Dec 13 01:16:06.998131 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:16:06.759749 ignition[762]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:16:07.018123 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:16:06.759764 ignition[762]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:16:06.759787 ignition[762]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 13 01:16:06.763493 ignition[762]: GET result: OK
Dec 13 01:16:06.763576 ignition[762]: parsing config with SHA512: df780bea6bf01cb0b8e0f53fc85c393eef72a48de1057228768fed6d35370e7fe4a650f6329457faa2aef5e16cfa5c673d8e7bc174b7e8725d1fb327a1f26c9b
Dec 13 01:16:06.769890 ignition[762]: fetch: fetch complete
Dec 13 01:16:06.769899 ignition[762]: fetch: fetch passed
Dec 13 01:16:06.769970 ignition[762]: Ignition finished successfully
Dec 13 01:16:06.848606 ignition[769]: Ignition 2.19.0
Dec 13 01:16:06.848616 ignition[769]: Stage: kargs
Dec 13 01:16:06.848825 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:06.848837 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:16:06.850061 ignition[769]: kargs: kargs passed
Dec 13 01:16:06.850125 ignition[769]: Ignition finished successfully
Dec 13 01:16:06.908659 ignition[775]: Ignition 2.19.0
Dec 13 01:16:06.908671 ignition[775]: Stage: disks
Dec 13 01:16:06.908903 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:06.908920 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:16:06.909966 ignition[775]: disks: disks passed
Dec 13 01:16:06.910028 ignition[775]: Ignition finished successfully
Dec 13 01:16:07.091965 systemd-fsck[783]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 01:16:07.219694 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:16:07.226004 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:16:07.376908 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:16:07.377769 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:16:07.387835 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:16:07.415222 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:07.433026 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:16:07.451494 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:16:07.482828 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (791)
Dec 13 01:16:07.482894 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:07.482934 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:07.451597 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:16:07.537181 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:16:07.537227 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:16:07.537244 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:16:07.451641 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:16:07.512134 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:07.546252 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:16:07.571122 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:16:07.720115 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:16:07.732009 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:16:07.743064 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:16:07.754042 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:16:07.909498 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:16:07.915021 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:16:07.934090 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:16:07.967927 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:07.974379 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:16:08.019289 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:16:08.029483 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:16:08.055212 ignition[904]: INFO : Ignition 2.19.0
Dec 13 01:16:08.055212 ignition[904]: INFO : Stage: mount
Dec 13 01:16:08.055212 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:08.055212 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:16:08.055212 ignition[904]: INFO : mount: mount passed
Dec 13 01:16:08.055212 ignition[904]: INFO : Ignition finished successfully
Dec 13 01:16:08.031057 systemd-networkd[752]: eth0: Gained IPv6LL
Dec 13 01:16:08.045103 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:16:08.384148 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:16:08.431936 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (917)
Dec 13 01:16:08.451146 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:08.451238 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:08.451266 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:16:08.475970 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:16:08.476067 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:16:08.479324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:08.516661 ignition[934]: INFO : Ignition 2.19.0
Dec 13 01:16:08.516661 ignition[934]: INFO : Stage: files
Dec 13 01:16:08.531024 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:08.531024 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:16:08.531024 ignition[934]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:16:08.531024 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:16:08.531024 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:16:08.531024 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:16:08.531024 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:16:08.531024 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:16:08.530136 unknown[934]: wrote ssh authorized keys file for user: core
Dec 13 01:16:08.632154 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:16:08.632154 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:16:08.666394 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:16:08.922946 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:16:08.922946 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:16:08.956045 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 01:16:09.221976 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:16:09.404137 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 01:16:09.674206 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:16:10.030135 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:16:10.030135 ignition[934]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: files passed
Dec 13 01:16:10.069064 ignition[934]: INFO : Ignition finished successfully
Dec 13 01:16:10.036213 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:16:10.055263 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:16:10.086241 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:16:10.127678 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:16:10.310054 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:10.310054 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:10.127834 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:16:10.378099 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:10.151561 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:16:10.176367 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:16:10.206078 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:16:10.283934 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:16:10.284069 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:16:10.303036 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:16:10.320182 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:16:10.334336 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:16:10.340119 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:16:10.416342 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:16:10.442186 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:16:10.478010 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:10.490330 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:10.511323 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:16:10.532262 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:16:10.532485 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:16:10.565386 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:16:10.586231 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:16:10.605324 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:16:10.625214 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:16:10.645347 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:16:10.664237 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:16:10.685329 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:16:10.709370 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:16:10.728561 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:16:10.748308 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:16:10.768313 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:16:10.768553 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:16:10.798401 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:16:10.821267 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:16:10.843282 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:16:10.843507 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:16:10.864190 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:16:10.864423 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:16:10.983189 ignition[986]: INFO : Ignition 2.19.0 Dec 13 01:16:10.983189 ignition[986]: INFO : Stage: umount Dec 13 01:16:10.983189 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:10.983189 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:10.983189 ignition[986]: INFO : umount: umount passed Dec 13 01:16:10.983189 ignition[986]: INFO : Ignition finished successfully Dec 13 01:16:10.889274 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:16:10.889521 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:16:10.910397 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:16:10.910624 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Dec 13 01:16:10.937180 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:16:10.992016 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:16:10.992295 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:16:11.017375 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:16:11.058030 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:16:11.058337 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:16:11.058785 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:16:11.058977 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:16:11.104122 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:16:11.105228 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:16:11.105342 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:16:11.109763 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:16:11.109900 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:16:11.138484 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:16:11.138613 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:16:11.148418 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:16:11.148482 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:16:11.174392 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:16:11.174475 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:16:11.185319 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:16:11.185400 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:16:11.202335 systemd[1]: Stopped target network.target - Network. 
Dec 13 01:16:11.219234 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:16:11.219322 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:16:11.236337 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:16:11.254280 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:16:11.257988 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:16:11.282227 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:16:11.291286 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:16:11.307338 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:16:11.307401 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:16:11.323344 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:16:11.323428 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:16:11.339336 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:16:11.339427 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:16:11.356320 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:16:11.356398 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:16:11.373334 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:16:11.373408 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:16:11.393589 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:16:11.397951 systemd-networkd[752]: eth0: DHCPv6 lease lost Dec 13 01:16:11.420277 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:16:11.437691 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:16:11.437827 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Dec 13 01:16:11.465890 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:16:11.466143 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:16:11.473899 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:16:11.473955 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:16:11.495034 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:16:11.506203 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:16:11.963036 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Dec 13 01:16:11.506297 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:16:11.533295 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:16:11.533366 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:16:11.562235 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:16:11.562320 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:16:11.582231 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:16:11.582326 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:16:11.595495 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:16:11.624749 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:16:11.624989 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:16:11.639433 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:16:11.639503 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:16:11.668329 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Dec 13 01:16:11.668394 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:16:11.695294 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:16:11.695383 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:16:11.721359 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:16:11.721442 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:16:11.768130 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:16:11.768250 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:16:11.801228 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:16:11.815033 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:16:11.815220 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:16:11.826107 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:16:11.826216 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:11.838668 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:16:11.838797 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:16:11.858460 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:16:11.858583 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:16:11.880837 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:16:11.905623 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:16:11.923167 systemd[1]: Switching root. 
Dec 13 01:16:12.283014 systemd-journald[183]: Journal stopped Dec 13 01:16:03.158500 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:16:03.158545 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:16:03.158564 kernel: BIOS-provided physical RAM map: Dec 13 01:16:03.158578 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 13 01:16:03.158592 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 13 01:16:03.158606 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 13 01:16:03.158623 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 13 01:16:03.158642 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 13 01:16:03.158656 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Dec 13 01:16:03.158671 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Dec 13 01:16:03.158686 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Dec 13 01:16:03.158701 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Dec 13 01:16:03.158715 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 13 01:16:03.158730 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 13 01:16:03.158753 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 13 01:16:03.158770 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 13 
01:16:03.158786 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Dec 13 01:16:03.158802 kernel: NX (Execute Disable) protection: active Dec 13 01:16:03.158818 kernel: APIC: Static calls initialized Dec 13 01:16:03.158834 kernel: efi: EFI v2.7 by EDK II Dec 13 01:16:03.158851 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Dec 13 01:16:03.159205 kernel: SMBIOS 2.4 present. Dec 13 01:16:03.159223 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Dec 13 01:16:03.159238 kernel: Hypervisor detected: KVM Dec 13 01:16:03.159261 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:16:03.159277 kernel: kvm-clock: using sched offset of 12740617750 cycles Dec 13 01:16:03.159294 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:16:03.159311 kernel: tsc: Detected 2299.998 MHz processor Dec 13 01:16:03.159328 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:16:03.159345 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:16:03.159361 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 13 01:16:03.159378 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Dec 13 01:16:03.159395 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:16:03.159416 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 13 01:16:03.159432 kernel: Using GB pages for direct mapping Dec 13 01:16:03.159449 kernel: Secure boot disabled Dec 13 01:16:03.159465 kernel: ACPI: Early table checksum verification disabled Dec 13 01:16:03.159481 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 13 01:16:03.159498 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 13 01:16:03.159515 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 13 
01:16:03.159539 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 13 01:16:03.159559 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 13 01:16:03.159577 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Dec 13 01:16:03.159595 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 13 01:16:03.159613 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 13 01:16:03.159630 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 13 01:16:03.159648 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 13 01:16:03.159669 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 13 01:16:03.159687 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 13 01:16:03.159705 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 13 01:16:03.159723 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 13 01:16:03.159740 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 13 01:16:03.159758 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 13 01:16:03.159776 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 13 01:16:03.159793 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 13 01:16:03.159810 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 13 01:16:03.159832 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 13 01:16:03.159849 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 01:16:03.159881 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 01:16:03.159899 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 01:16:03.159917 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Dec 13 01:16:03.159934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 13 01:16:03.159952 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 01:16:03.159970 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 01:16:03.159995 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Dec 13 01:16:03.160017 kernel: Zone ranges: Dec 13 01:16:03.160035 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:16:03.160052 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 01:16:03.160070 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:16:03.160088 kernel: Movable zone start for each node Dec 13 01:16:03.160106 kernel: Early memory node ranges Dec 13 01:16:03.160123 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 01:16:03.160141 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 01:16:03.160159 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Dec 13 01:16:03.160176 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 01:16:03.160197 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:16:03.160215 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 01:16:03.160232 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:16:03.160250 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 01:16:03.160267 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 01:16:03.160285 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 01:16:03.160302 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 01:16:03.160320 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 01:16:03.160337 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:16:03.160359 kernel: IOAPIC[0]: apic_id 0, 
version 17, address 0xfec00000, GSI 0-23 Dec 13 01:16:03.160376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:16:03.160394 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:16:03.160412 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:16:03.160429 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:16:03.160447 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:16:03.160465 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:16:03.160482 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 01:16:03.160500 kernel: Booting paravirtualized kernel on KVM Dec 13 01:16:03.160521 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:16:03.160539 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:16:03.160557 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:16:03.160575 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:16:03.160592 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:16:03.160609 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:16:03.160627 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:16:03.160646 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:16:03.160668 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Dec 13 01:16:03.160685 kernel: random: crng init done Dec 13 01:16:03.160702 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 01:16:03.160720 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:16:03.160738 kernel: Fallback order for Node 0: 0 Dec 13 01:16:03.160756 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Dec 13 01:16:03.160773 kernel: Policy zone: Normal Dec 13 01:16:03.160791 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:16:03.160808 kernel: software IO TLB: area num 2. Dec 13 01:16:03.160830 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 346940K reserved, 0K cma-reserved) Dec 13 01:16:03.160848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:16:03.160878 kernel: Kernel/User page tables isolation: enabled Dec 13 01:16:03.160896 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:16:03.160913 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:16:03.160931 kernel: Dynamic Preempt: voluntary Dec 13 01:16:03.160948 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:16:03.160968 kernel: rcu: RCU event tracing is enabled. Dec 13 01:16:03.161009 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:16:03.161028 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:16:03.161047 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:16:03.161068 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:16:03.161088 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:16:03.161107 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:16:03.161125 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 01:16:03.161144 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 13 01:16:03.161162 kernel: Console: colour dummy device 80x25 Dec 13 01:16:03.161207 kernel: printk: console [ttyS0] enabled Dec 13 01:16:03.161240 kernel: ACPI: Core revision 20230628 Dec 13 01:16:03.161277 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:16:03.161293 kernel: x2apic enabled Dec 13 01:16:03.161311 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:16:03.161329 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 01:16:03.161348 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:16:03.161365 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Dec 13 01:16:03.161386 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 01:16:03.161405 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 01:16:03.161424 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:16:03.161441 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 01:16:03.161456 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 01:16:03.161472 kernel: Spectre V2 : Mitigation: IBRS Dec 13 01:16:03.161489 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:16:03.161507 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:16:03.161523 kernel: RETBleed: Mitigation: IBRS Dec 13 01:16:03.161551 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:16:03.161574 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Dec 13 01:16:03.161590 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:16:03.161606 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 01:16:03.161622 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 
13 01:16:03.161640 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:16:03.161657 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:16:03.161673 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:16:03.161690 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:16:03.161713 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 01:16:03.161732 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:16:03.161750 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:16:03.161766 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:16:03.161783 kernel: landlock: Up and running. Dec 13 01:16:03.161803 kernel: SELinux: Initializing. Dec 13 01:16:03.161823 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:16:03.161843 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:16:03.161892 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 01:16:03.161918 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:16:03.161937 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:16:03.161957 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:16:03.162060 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 01:16:03.162093 kernel: signal: max sigframe size: 1776 Dec 13 01:16:03.162114 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:16:03.162141 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:16:03.162164 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:16:03.162184 kernel: smp: Bringing up secondary CPUs ... 
Dec 13 01:16:03.162222 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:16:03.162240 kernel: .... node #0, CPUs: #1 Dec 13 01:16:03.162261 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 01:16:03.162282 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 01:16:03.162301 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:16:03.162320 kernel: smpboot: Max logical packages: 1 Dec 13 01:16:03.162340 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 01:16:03.162357 kernel: devtmpfs: initialized Dec 13 01:16:03.162378 kernel: x86/mm: Memory block size: 128MB Dec 13 01:16:03.162396 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 01:16:03.162415 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:16:03.162434 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:16:03.162453 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:16:03.162471 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:16:03.162498 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:16:03.162517 kernel: audit: type=2000 audit(1734052561.136:1): state=initialized audit_enabled=0 res=1 Dec 13 01:16:03.162535 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:16:03.162558 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:16:03.162576 kernel: cpuidle: using governor menu Dec 13 01:16:03.162594 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:16:03.162612 kernel: dca service started, version 1.12.1 Dec 13 01:16:03.162630 kernel: PCI: Using configuration type 1 for base access Dec 13 
01:16:03.162649 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 01:16:03.162667 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:16:03.162686 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:16:03.162704 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:16:03.162726 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:16:03.162744 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:16:03.162762 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:16:03.162781 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:16:03.162800 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:16:03.162818 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 01:16:03.162836 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:16:03.162913 kernel: ACPI: Interpreter enabled Dec 13 01:16:03.164769 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:16:03.164816 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:16:03.164837 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:16:03.164894 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 13 01:16:03.164925 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 01:16:03.164942 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:16:03.165203 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:16:03.165411 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 01:16:03.165583 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 01:16:03.165612 kernel: PCI host bridge to bus 0000:00 Dec 13 01:16:03.165785 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] 
Dec 13 01:16:03.165975 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:16:03.166132 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:16:03.166314 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 01:16:03.166493 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:16:03.166694 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 01:16:03.168286 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 01:16:03.168535 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 01:16:03.168725 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 01:16:03.169271 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 01:16:03.169469 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 01:16:03.169954 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 01:16:03.170317 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:16:03.170522 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 01:16:03.170718 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 01:16:03.170951 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:16:03.171143 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 01:16:03.171337 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 01:16:03.171368 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:16:03.171389 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:16:03.171409 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:16:03.171428 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:16:03.171447 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 01:16:03.171467 kernel: iommu: Default domain type: Translated Dec 13 01:16:03.171487 kernel: iommu: DMA 
domain TLB invalidation policy: lazy mode Dec 13 01:16:03.171506 kernel: efivars: Registered efivars operations Dec 13 01:16:03.171526 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:16:03.171549 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:16:03.171566 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 01:16:03.171586 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 01:16:03.171604 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 01:16:03.171624 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 01:16:03.171642 kernel: vgaarb: loaded Dec 13 01:16:03.171662 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:16:03.171682 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:16:03.171702 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:16:03.171725 kernel: pnp: PnP ACPI init Dec 13 01:16:03.171745 kernel: pnp: PnP ACPI: found 7 devices Dec 13 01:16:03.171764 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:16:03.171783 kernel: NET: Registered PF_INET protocol family Dec 13 01:16:03.171803 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:16:03.171823 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 01:16:03.171843 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:16:03.171877 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:16:03.171895 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 01:16:03.171918 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 01:16:03.171936 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:16:03.171955 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:16:03.171973 kernel: 
NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:16:03.171992 kernel: NET: Registered PF_XDP protocol family Dec 13 01:16:03.172185 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:16:03.172375 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:16:03.172549 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:16:03.172729 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 01:16:03.174721 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:16:03.174762 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:16:03.174783 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 01:16:03.174804 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 01:16:03.174824 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:16:03.174844 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:16:03.174904 kernel: clocksource: Switched to clocksource tsc Dec 13 01:16:03.174929 kernel: Initialise system trusted keyrings Dec 13 01:16:03.174947 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 01:16:03.174964 kernel: Key type asymmetric registered Dec 13 01:16:03.174981 kernel: Asymmetric key parser 'x509' registered Dec 13 01:16:03.174998 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:16:03.175016 kernel: io scheduler mq-deadline registered Dec 13 01:16:03.175034 kernel: io scheduler kyber registered Dec 13 01:16:03.175051 kernel: io scheduler bfq registered Dec 13 01:16:03.175069 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:16:03.175093 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 01:16:03.175320 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 01:16:03.175349 kernel: ACPI: \_SB_.LNKD: 
Enabled at IRQ 10 Dec 13 01:16:03.175554 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 01:16:03.175580 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 01:16:03.175769 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 01:16:03.175792 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:16:03.175811 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:16:03.175829 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 01:16:03.175943 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 01:16:03.175965 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 01:16:03.176184 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 01:16:03.176211 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:16:03.176238 kernel: i8042: Warning: Keylock active Dec 13 01:16:03.176258 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:16:03.176278 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:16:03.176463 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 01:16:03.176642 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 01:16:03.176809 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:16:02 UTC (1734052562) Dec 13 01:16:03.177261 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 01:16:03.177290 kernel: intel_pstate: CPU model not supported Dec 13 01:16:03.177308 kernel: pstore: Using crash dump compression: deflate Dec 13 01:16:03.177326 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 01:16:03.177479 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:16:03.177496 kernel: Segment Routing with IPv6 Dec 13 01:16:03.177521 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:16:03.177538 kernel: NET: Registered PF_PACKET protocol family Dec 13 
01:16:03.177557 kernel: Key type dns_resolver registered Dec 13 01:16:03.177574 kernel: IPI shorthand broadcast: enabled Dec 13 01:16:03.177717 kernel: sched_clock: Marking stable (957005196, 203079601)->(1330112298, -170027501) Dec 13 01:16:03.177735 kernel: registered taskstats version 1 Dec 13 01:16:03.177754 kernel: Loading compiled-in X.509 certificates Dec 13 01:16:03.177771 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:16:03.177789 kernel: Key type .fscrypt registered Dec 13 01:16:03.177811 kernel: Key type fscrypt-provisioning registered Dec 13 01:16:03.177830 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:16:03.178094 kernel: ima: No architecture policies found Dec 13 01:16:03.178117 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:16:03.178228 kernel: clk: Disabling unused clocks Dec 13 01:16:03.178247 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:16:03.178265 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:16:03.178285 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:16:03.178310 kernel: Run /init as init process Dec 13 01:16:03.178328 kernel: with arguments: Dec 13 01:16:03.178346 kernel: /init Dec 13 01:16:03.178362 kernel: with environment: Dec 13 01:16:03.178377 kernel: HOME=/ Dec 13 01:16:03.178395 kernel: TERM=linux Dec 13 01:16:03.178412 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:16:03.178435 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:16:03.178462 systemd[1]: Detected virtualization google. 
Dec 13 01:16:03.178481 systemd[1]: Detected architecture x86-64. Dec 13 01:16:03.178500 systemd[1]: Running in initrd. Dec 13 01:16:03.178517 systemd[1]: No hostname configured, using default hostname. Dec 13 01:16:03.178537 systemd[1]: Hostname set to . Dec 13 01:16:03.178555 systemd[1]: Initializing machine ID from random generator. Dec 13 01:16:03.178573 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:16:03.178593 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:16:03.178616 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:16:03.178636 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:16:03.178656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:16:03.178675 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:16:03.178694 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:16:03.178714 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:16:03.178733 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:16:03.178756 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:16:03.178775 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:16:03.178815 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:16:03.178839 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:16:03.178890 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:16:03.178908 systemd[1]: Reached target timers.target - Timer Units. 
Dec 13 01:16:03.178929 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:16:03.178948 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:16:03.178968 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:16:03.178987 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:16:03.179007 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:16:03.179028 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:16:03.179050 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:16:03.179072 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:16:03.179093 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:16:03.179119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:16:03.179140 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:16:03.179160 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:16:03.179180 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:16:03.179201 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:16:03.179229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:16:03.179250 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:16:03.179309 systemd-journald[183]: Collecting audit messages is disabled. Dec 13 01:16:03.179371 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:16:03.179393 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:16:03.179421 systemd-journald[183]: Journal started Dec 13 01:16:03.179462 systemd-journald[183]: Runtime Journal (/run/log/journal/6fe744ee66a443c3bcaa6d544ce69e40) is 8.0M, max 148.7M, 140.7M free. 
Dec 13 01:16:03.161594 systemd-modules-load[184]: Inserted module 'overlay' Dec 13 01:16:03.190990 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:16:03.199237 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:16:03.210156 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:16:03.219528 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:16:03.226003 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:16:03.226047 kernel: Bridge firewalling registered Dec 13 01:16:03.225167 systemd-modules-load[184]: Inserted module 'br_netfilter' Dec 13 01:16:03.230180 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:16:03.235412 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:16:03.240551 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:03.253413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:16:03.266446 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:16:03.273084 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:16:03.290210 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:16:03.303248 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:16:03.314264 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:16:03.325378 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:16:03.339199 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:16:03.352317 systemd-resolved[211]: Positive Trust Anchors: Dec 13 01:16:03.352332 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:16:03.352394 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:16:03.357337 systemd-resolved[211]: Defaulting to hostname 'linux'. Dec 13 01:16:03.359187 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:16:03.384331 dracut-cmdline[219]: dracut-dracut-053 Dec 13 01:16:03.384331 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:16:03.380242 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:16:03.474907 kernel: SCSI subsystem initialized Dec 13 01:16:03.485916 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 01:16:03.497910 kernel: iscsi: registered transport (tcp) Dec 13 01:16:03.521897 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:16:03.521976 kernel: QLogic iSCSI HBA Driver Dec 13 01:16:03.576260 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:16:03.587144 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:16:03.617156 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:16:03.617251 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:16:03.617281 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:16:03.665021 kernel: raid6: avx2x4 gen() 18068 MB/s Dec 13 01:16:03.681921 kernel: raid6: avx2x2 gen() 18171 MB/s Dec 13 01:16:03.699657 kernel: raid6: avx2x1 gen() 14302 MB/s Dec 13 01:16:03.699741 kernel: raid6: using algorithm avx2x2 gen() 18171 MB/s Dec 13 01:16:03.718228 kernel: raid6: .... xor() 17386 MB/s, rmw enabled Dec 13 01:16:03.718312 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:16:03.741912 kernel: xor: automatically using best checksumming function avx Dec 13 01:16:03.915932 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:16:03.928949 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:16:03.935105 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:16:03.970081 systemd-udevd[401]: Using default interface naming scheme 'v255'. Dec 13 01:16:03.976956 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:16:03.988229 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:16:04.019205 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Dec 13 01:16:04.059156 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 13 01:16:04.069177 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:16:04.162947 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:16:04.177100 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:16:04.213210 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:16:04.237878 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:16:04.260024 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:16:04.297026 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:16:04.297068 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:16:04.297093 kernel: AES CTR mode by8 optimization enabled Dec 13 01:16:04.277483 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:16:04.331109 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:16:04.413356 kernel: scsi host0: Virtio SCSI HBA Dec 13 01:16:04.413802 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 01:16:04.391818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:16:04.392090 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:16:04.435961 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:16:04.472317 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 01:16:04.537481 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 01:16:04.537741 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 01:16:04.538025 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 01:16:04.538415 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 01:16:04.538656 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Dec 13 01:16:04.538686 kernel: GPT:17805311 != 25165823 Dec 13 01:16:04.538712 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:16:04.538737 kernel: GPT:17805311 != 25165823 Dec 13 01:16:04.538793 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:16:04.538818 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:16:04.538843 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 01:16:04.449260 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:16:04.449593 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:04.547088 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:16:04.628036 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (448) Dec 13 01:16:04.628078 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (453) Dec 13 01:16:04.563372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:16:04.587624 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:16:04.649548 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Dec 13 01:16:04.660422 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Dec 13 01:16:04.680542 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:04.703471 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Dec 13 01:16:04.722058 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Dec 13 01:16:04.752809 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 13 01:16:04.764116 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Dec 13 01:16:04.795822 disk-uuid[543]: Primary Header is updated. Dec 13 01:16:04.795822 disk-uuid[543]: Secondary Entries is updated. Dec 13 01:16:04.795822 disk-uuid[543]: Secondary Header is updated. Dec 13 01:16:04.829139 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:16:04.802119 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:16:04.845907 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:16:04.875893 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:16:04.898377 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:16:05.867954 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:16:05.868931 disk-uuid[544]: The operation has completed successfully. Dec 13 01:16:05.955298 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:16:05.955461 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:16:05.988147 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:16:06.019015 sh[570]: Success Dec 13 01:16:06.044046 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:16:06.145407 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:16:06.151040 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:16:06.187813 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 01:16:06.228630 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:16:06.228777 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:16:06.228817 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:16:06.238087 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:16:06.245129 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:16:06.285964 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:16:06.295368 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:16:06.296588 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:16:06.302131 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:16:06.315126 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:16:06.377990 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:16:06.392388 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:16:06.392507 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:16:06.411584 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:16:06.411725 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:16:06.440014 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:16:06.429739 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:16:06.457252 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:16:06.485136 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 13 01:16:06.498186 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:16:06.543822 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:16:06.590705 systemd-networkd[752]: lo: Link UP Dec 13 01:16:06.590719 systemd-networkd[752]: lo: Gained carrier Dec 13 01:16:06.592645 systemd-networkd[752]: Enumeration completed Dec 13 01:16:06.592999 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:16:06.593455 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:06.593462 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:16:06.596069 systemd-networkd[752]: eth0: Link UP Dec 13 01:16:06.596076 systemd-networkd[752]: eth0: Gained carrier Dec 13 01:16:06.596093 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:06.609164 systemd[1]: Reached target network.target - Network. Dec 13 01:16:06.693529 ignition[741]: Ignition 2.19.0 Dec 13 01:16:06.613020 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.51/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 01:16:06.693540 ignition[741]: Stage: fetch-offline Dec 13 01:16:06.695521 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:16:06.693595 ignition[741]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:06.712247 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 01:16:06.693605 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:06.769170 unknown[762]: fetched base config from "system" Dec 13 01:16:06.693739 ignition[741]: parsed url from cmdline: "" Dec 13 01:16:06.769208 unknown[762]: fetched base config from "system" Dec 13 01:16:06.693746 ignition[741]: no config URL provided Dec 13 01:16:06.769216 unknown[762]: fetched user config from "gcp" Dec 13 01:16:06.693756 ignition[741]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:16:06.771852 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:16:06.693769 ignition[741]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:16:06.796393 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:16:06.693777 ignition[741]: failed to fetch config: resource requires networking Dec 13 01:16:06.851404 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:16:06.694175 ignition[741]: Ignition finished successfully Dec 13 01:16:06.872087 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:16:06.759337 ignition[762]: Ignition 2.19.0 Dec 13 01:16:06.911170 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:16:06.759347 ignition[762]: Stage: fetch Dec 13 01:16:06.929251 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:16:06.759559 ignition[762]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:06.948083 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:16:06.759577 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:06.967080 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:16:06.759732 ignition[762]: parsed url from cmdline: "" Dec 13 01:16:06.981090 systemd[1]: Reached target sysinit.target - System Initialization. 
Dec 13 01:16:06.759739 ignition[762]: no config URL provided Dec 13 01:16:06.998131 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:16:06.759749 ignition[762]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:16:07.018123 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:16:06.759764 ignition[762]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:16:06.759787 ignition[762]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 13 01:16:06.763493 ignition[762]: GET result: OK Dec 13 01:16:06.763576 ignition[762]: parsing config with SHA512: df780bea6bf01cb0b8e0f53fc85c393eef72a48de1057228768fed6d35370e7fe4a650f6329457faa2aef5e16cfa5c673d8e7bc174b7e8725d1fb327a1f26c9b Dec 13 01:16:06.769890 ignition[762]: fetch: fetch complete Dec 13 01:16:06.769899 ignition[762]: fetch: fetch passed Dec 13 01:16:06.769970 ignition[762]: Ignition finished successfully Dec 13 01:16:06.848606 ignition[769]: Ignition 2.19.0 Dec 13 01:16:06.848616 ignition[769]: Stage: kargs Dec 13 01:16:06.848825 ignition[769]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:06.848837 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:06.850061 ignition[769]: kargs: kargs passed Dec 13 01:16:06.850125 ignition[769]: Ignition finished successfully Dec 13 01:16:06.908659 ignition[775]: Ignition 2.19.0 Dec 13 01:16:06.908671 ignition[775]: Stage: disks Dec 13 01:16:06.908903 ignition[775]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:06.908920 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:06.909966 ignition[775]: disks: disks passed Dec 13 01:16:06.910028 ignition[775]: Ignition finished successfully Dec 13 01:16:07.091965 systemd-fsck[783]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 01:16:07.219694 systemd[1]: Finished systemd-fsck-root.service - File System Check on 
/dev/disk/by-label/ROOT. Dec 13 01:16:07.226004 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:16:07.376908 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:16:07.377769 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:16:07.387835 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:16:07.415222 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:16:07.433026 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:16:07.451494 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:16:07.482828 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (791) Dec 13 01:16:07.482894 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:16:07.482934 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:16:07.451597 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:16:07.537181 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:16:07.537227 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:16:07.537244 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:16:07.451641 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:16:07.512134 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:16:07.546252 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:16:07.571122 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 13 01:16:07.720115 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:16:07.732009 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:16:07.743064 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:16:07.754042 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:16:07.909498 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:16:07.915021 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:16:07.934090 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:16:07.967927 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:16:07.974379 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:16:08.019289 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:16:08.029483 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:16:08.055212 ignition[904]: INFO : Ignition 2.19.0 Dec 13 01:16:08.055212 ignition[904]: INFO : Stage: mount Dec 13 01:16:08.055212 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:16:08.055212 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:16:08.055212 ignition[904]: INFO : mount: mount passed Dec 13 01:16:08.055212 ignition[904]: INFO : Ignition finished successfully Dec 13 01:16:08.031057 systemd-networkd[752]: eth0: Gained IPv6LL Dec 13 01:16:08.045103 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:16:08.384148 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 13 01:16:08.431936 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (917)
Dec 13 01:16:08.451146 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:16:08.451238 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:16:08.451266 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:16:08.475970 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:16:08.476067 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:16:08.479324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:16:08.516661 ignition[934]: INFO : Ignition 2.19.0
Dec 13 01:16:08.516661 ignition[934]: INFO : Stage: files
Dec 13 01:16:08.531024 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:08.531024 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:16:08.531024 ignition[934]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:16:08.531024 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:16:08.531024 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:16:08.531024 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:16:08.531024 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:16:08.531024 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:16:08.530136 unknown[934]: wrote ssh authorized keys file for user: core
Dec 13 01:16:08.632154 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:16:08.632154 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:16:08.666394 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:16:08.922946 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:16:08.922946 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:16:08.956045 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 01:16:09.221976 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:16:09.404137 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:16:09.423035 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 01:16:09.674206 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:16:10.030135 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:16:10.030135 ignition[934]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:16:10.069064 ignition[934]: INFO : files: files passed
Dec 13 01:16:10.069064 ignition[934]: INFO : Ignition finished successfully
Dec 13 01:16:10.036213 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:16:10.055263 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:16:10.086241 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:16:10.127678 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:16:10.310054 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:10.310054 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:10.127834 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:16:10.378099 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:16:10.151561 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:16:10.176367 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:16:10.206078 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:16:10.283934 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:16:10.284069 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:16:10.303036 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:16:10.320182 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:16:10.334336 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:16:10.340119 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:16:10.416342 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:16:10.442186 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:16:10.478010 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:16:10.490330 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:10.511323 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:16:10.532262 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:16:10.532485 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:16:10.565386 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:16:10.586231 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:16:10.605324 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:16:10.625214 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:16:10.645347 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:16:10.664237 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:16:10.685329 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:16:10.709370 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:16:10.728561 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:16:10.748308 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:16:10.768313 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:16:10.768553 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:16:10.798401 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:16:10.821267 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:16:10.843282 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:16:10.843507 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:16:10.864190 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:16:10.864423 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:16:10.983189 ignition[986]: INFO : Ignition 2.19.0
Dec 13 01:16:10.983189 ignition[986]: INFO : Stage: umount
Dec 13 01:16:10.983189 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:16:10.983189 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 01:16:10.983189 ignition[986]: INFO : umount: umount passed
Dec 13 01:16:10.983189 ignition[986]: INFO : Ignition finished successfully
Dec 13 01:16:10.889274 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:16:10.889521 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:16:10.910397 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:16:10.910624 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:16:10.937180 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:16:10.992016 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:16:10.992295 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:16:11.017375 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:16:11.058030 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:16:11.058337 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:16:11.058785 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:16:11.058977 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:16:11.104122 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:16:11.105228 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:16:11.105342 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:16:11.109763 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:16:11.109900 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:16:11.138484 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:16:11.138613 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:16:11.148418 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:16:11.148482 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:16:11.174392 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:16:11.174475 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:16:11.185319 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:16:11.185400 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:16:11.202335 systemd[1]: Stopped target network.target - Network.
Dec 13 01:16:11.219234 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:16:11.219322 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:16:11.236337 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:16:11.254280 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:16:11.257988 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:16:11.282227 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:16:11.291286 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:16:11.307338 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:16:11.307401 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:16:11.323344 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:16:11.323428 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:16:11.339336 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:16:11.339427 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:16:11.356320 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:16:11.356398 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:16:11.373334 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:16:11.373408 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:16:11.393589 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:16:11.397951 systemd-networkd[752]: eth0: DHCPv6 lease lost
Dec 13 01:16:11.420277 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:16:11.437691 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:16:11.437827 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:16:11.465890 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:16:11.466143 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:16:11.473899 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:16:11.473955 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:16:11.495034 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:16:11.506203 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:16:11.963036 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:16:11.506297 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:16:11.533295 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:16:11.533366 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:16:11.562235 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:16:11.562320 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:16:11.582231 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:16:11.582326 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:16:11.595495 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:16:11.624749 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:16:11.624989 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:16:11.639433 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:16:11.639503 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:16:11.668329 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:16:11.668394 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:16:11.695294 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:16:11.695383 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:16:11.721359 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:16:11.721442 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:16:11.768130 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:16:11.768250 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:16:11.801228 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:16:11.815033 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:16:11.815220 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:16:11.826107 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:16:11.826216 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:16:11.838668 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:16:11.838797 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:16:11.858460 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:16:11.858583 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:16:11.880837 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:16:11.905623 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:16:11.923167 systemd[1]: Switching root.
Dec 13 01:16:12.283014 systemd-journald[183]: Journal stopped
Dec 13 01:16:14.883664 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:16:14.883726 kernel: SELinux: policy capability open_perms=1
Dec 13 01:16:14.883750 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:16:14.883769 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:16:14.883787 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:16:14.883817 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:16:14.883840 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:16:14.883879 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:16:14.883908 kernel: audit: type=1403 audit(1734052572.665:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:16:14.883931 systemd[1]: Successfully loaded SELinux policy in 92.003ms.
Dec 13 01:16:14.883955 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.712ms.
Dec 13 01:16:14.883978 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:16:14.883999 systemd[1]: Detected virtualization google.
Dec 13 01:16:14.884018 systemd[1]: Detected architecture x86-64.
Dec 13 01:16:14.884043 systemd[1]: Detected first boot.
Dec 13 01:16:14.884063 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:16:14.884081 zram_generator::config[1027]: No configuration found.
Dec 13 01:16:14.884103 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:16:14.884125 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:16:14.884150 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:16:14.884172 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:16:14.884192 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:16:14.884212 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:16:14.884232 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:16:14.884252 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:16:14.884274 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:16:14.884301 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:16:14.884320 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:16:14.884339 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:16:14.884359 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:16:14.884379 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:16:14.884400 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:16:14.884421 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:16:14.884443 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:16:14.884469 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:16:14.884490 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:16:14.884511 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:16:14.884533 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:16:14.884554 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:16:14.884576 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:16:14.884603 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:16:14.884624 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:16:14.884646 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:16:14.884671 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:16:14.884691 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:16:14.884712 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:16:14.884734 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:16:14.884755 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:16:14.884779 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:16:14.884803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:16:14.884831 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:16:14.884873 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:16:14.884906 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:16:14.884926 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:16:14.884949 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:16:14.884977 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:16:14.884999 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:16:14.885023 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:16:14.885047 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:16:14.885072 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:16:14.885096 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:16:14.885121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:16:14.885147 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:16:14.885175 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:16:14.885197 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:16:14.885219 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:16:14.885243 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:16:14.885266 kernel: ACPI: bus type drm_connector registered
Dec 13 01:16:14.885288 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:16:14.885311 kernel: fuse: init (API version 7.39)
Dec 13 01:16:14.885331 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:16:14.885359 kernel: loop: module loaded
Dec 13 01:16:14.885383 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:16:14.885408 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:16:14.885431 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:16:14.885456 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:16:14.885480 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:16:14.885503 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:16:14.885527 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:16:14.885550 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:16:14.885619 systemd-journald[1114]: Collecting audit messages is disabled.
Dec 13 01:16:14.885667 systemd-journald[1114]: Journal started
Dec 13 01:16:14.885716 systemd-journald[1114]: Runtime Journal (/run/log/journal/38cb4ce3309348258c6d79fb6c668e28) is 8.0M, max 148.7M, 140.7M free.
Dec 13 01:16:13.614762 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:16:13.638236 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 01:16:13.638814 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:16:14.902023 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:16:14.939901 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:16:14.956900 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:16:14.956993 systemd[1]: Stopped verity-setup.service.
Dec 13 01:16:14.990015 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:16:14.997902 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:16:15.010571 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:16:15.021278 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:16:15.032246 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:16:15.043238 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:16:15.054247 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:16:15.065231 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:16:15.075398 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:16:15.087413 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:16:15.099445 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:16:15.099709 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:16:15.111451 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:16:15.111701 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:16:15.124385 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:16:15.124623 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:16:15.135396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:16:15.135625 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:16:15.147396 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:16:15.147625 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:16:15.158388 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:16:15.158616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:16:15.169411 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:16:15.179464 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:16:15.191450 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:16:15.203433 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:16:15.229172 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:16:15.246026 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:16:15.277048 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:16:15.287125 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:16:15.287219 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:16:15.299171 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:16:15.327246 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:16:15.348197 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:16:15.358248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:16:15.370212 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:16:15.395367 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:16:15.407077 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:16:15.417542 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:16:15.417711 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:16:15.423032 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:16:15.449003 systemd-journald[1114]: Time spent on flushing to /var/log/journal/38cb4ce3309348258c6d79fb6c668e28 is 63.352ms for 932 entries.
Dec 13 01:16:15.449003 systemd-journald[1114]: System Journal (/var/log/journal/38cb4ce3309348258c6d79fb6c668e28) is 8.0M, max 584.8M, 576.8M free.
Dec 13 01:16:15.570956 systemd-journald[1114]: Received client request to flush runtime journal. Dec 13 01:16:15.571057 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 01:16:15.464157 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:16:15.482207 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:16:15.499155 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:16:15.521271 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:16:15.533187 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:16:15.544401 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:16:15.557690 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:16:15.570579 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:16:15.581791 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:16:15.609876 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:16:15.617900 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:16:15.634766 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:16:15.646457 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:16:15.666881 kernel: loop1: detected capacity change from 0 to 142488 Dec 13 01:16:15.675826 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:16:15.689713 udevadm[1148]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:16:15.712123 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Dec 13 01:16:15.714717 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:16:15.752624 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Dec 13 01:16:15.752660 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Dec 13 01:16:15.764756 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:16:15.775951 kernel: loop2: detected capacity change from 0 to 54824 Dec 13 01:16:15.890009 kernel: loop3: detected capacity change from 0 to 140768 Dec 13 01:16:15.995316 kernel: loop4: detected capacity change from 0 to 210664 Dec 13 01:16:16.040309 kernel: loop5: detected capacity change from 0 to 142488 Dec 13 01:16:16.103495 kernel: loop6: detected capacity change from 0 to 54824 Dec 13 01:16:16.151954 kernel: loop7: detected capacity change from 0 to 140768 Dec 13 01:16:16.212830 (sd-merge)[1169]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Dec 13 01:16:16.213753 (sd-merge)[1169]: Merged extensions into '/usr'. Dec 13 01:16:16.221506 systemd[1]: Reloading requested from client PID 1145 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:16:16.221533 systemd[1]: Reloading... Dec 13 01:16:16.364903 zram_generator::config[1191]: No configuration found. Dec 13 01:16:16.602894 ldconfig[1140]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:16:16.666758 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:16:16.769740 systemd[1]: Reloading finished in 547 ms. Dec 13 01:16:16.814112 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:16:16.824845 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Dec 13 01:16:16.836484 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:16:16.858115 systemd[1]: Starting ensure-sysext.service... Dec 13 01:16:16.872954 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:16:16.894136 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:16:16.911950 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:16:16.911973 systemd[1]: Reloading... Dec 13 01:16:16.916077 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:16:16.916764 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:16:16.918614 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:16:16.919241 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Dec 13 01:16:16.919372 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Dec 13 01:16:16.925511 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:16:16.925655 systemd-tmpfiles[1237]: Skipping /boot Dec 13 01:16:16.945301 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:16:16.945322 systemd-tmpfiles[1237]: Skipping /boot Dec 13 01:16:17.000476 systemd-udevd[1238]: Using default interface naming scheme 'v255'. Dec 13 01:16:17.070093 zram_generator::config[1267]: No configuration found. 
Dec 13 01:16:17.179922 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1294) Dec 13 01:16:17.195800 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1294) Dec 13 01:16:17.346937 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1285) Dec 13 01:16:17.370905 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:16:17.384106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:16:17.407261 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 13 01:16:17.546699 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:16:17.515933 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:16:17.517058 systemd[1]: Reloading finished in 604 ms. Dec 13 01:16:17.539521 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:16:17.558584 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:16:17.565893 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Dec 13 01:16:17.589463 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 01:16:17.604525 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:16:17.620993 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 01:16:17.638901 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:16:17.661504 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:16:17.673443 systemd[1]: Finished ensure-sysext.service. 
Dec 13 01:16:17.689145 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 13 01:16:17.701233 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:17.710128 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:16:17.733042 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:16:17.744372 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:16:17.751757 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:16:17.765780 augenrules[1358]: No rules Dec 13 01:16:17.770363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:16:17.790979 lvm[1354]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:16:17.792076 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:16:17.809610 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:16:17.830113 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:16:17.846366 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:16:17.855170 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:16:17.861981 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:16:17.878568 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:16:17.898109 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:16:17.910133 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 01:16:17.919031 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:16:17.938142 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:16:17.956713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:16:17.967004 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:16:17.968773 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:16:17.979571 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:16:17.991441 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:16:18.003480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:16:18.003982 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:16:18.004463 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:16:18.004703 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:16:18.005167 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:16:18.005386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:16:18.005837 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:16:18.006094 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:16:18.010576 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:16:18.011205 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:16:18.022624 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:16:18.032089 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Dec 13 01:16:18.032949 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:16:18.038091 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:16:18.040104 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Dec 13 01:16:18.041001 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:16:18.041102 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:16:18.047170 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:16:18.052124 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:16:18.052203 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:16:18.070444 lvm[1390]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:16:18.099675 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:16:18.124583 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:16:18.145957 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Dec 13 01:16:18.159961 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:16:18.171303 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Dec 13 01:16:18.256679 systemd-networkd[1371]: lo: Link UP Dec 13 01:16:18.256700 systemd-networkd[1371]: lo: Gained carrier Dec 13 01:16:18.259241 systemd-networkd[1371]: Enumeration completed Dec 13 01:16:18.259930 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:18.259941 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:16:18.260525 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:16:18.261106 systemd-networkd[1371]: eth0: Link UP Dec 13 01:16:18.261113 systemd-networkd[1371]: eth0: Gained carrier Dec 13 01:16:18.261137 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:16:18.269822 systemd-resolved[1372]: Positive Trust Anchors: Dec 13 01:16:18.269841 systemd-resolved[1372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:16:18.269921 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:16:18.271003 systemd-networkd[1371]: eth0: DHCPv4 address 10.128.0.51/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 01:16:18.276900 systemd-resolved[1372]: Defaulting to hostname 'linux'. Dec 13 01:16:18.277305 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Dec 13 01:16:18.290180 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:16:18.300173 systemd[1]: Reached target network.target - Network. Dec 13 01:16:18.309055 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:16:18.321091 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:16:18.332178 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:16:18.343101 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:16:18.354314 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:16:18.364225 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:16:18.375056 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:16:18.387038 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:16:18.387100 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:16:18.396049 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:16:18.407483 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:16:18.418947 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:16:18.441993 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:16:18.453006 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:16:18.463228 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:16:18.473053 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:16:18.482090 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Dec 13 01:16:18.482153 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:16:18.492079 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:16:18.503896 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:16:18.522190 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:16:18.545710 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:16:18.571169 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:16:18.575085 jq[1422]: false Dec 13 01:16:18.581013 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:16:18.595167 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:16:18.617334 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:16:18.635076 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Dec 13 01:16:18.636628 coreos-metadata[1420]: Dec 13 01:16:18.634 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Dec 13 01:16:18.639532 coreos-metadata[1420]: Dec 13 01:16:18.639 INFO Fetch successful Dec 13 01:16:18.639532 coreos-metadata[1420]: Dec 13 01:16:18.639 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Dec 13 01:16:18.641103 coreos-metadata[1420]: Dec 13 01:16:18.641 INFO Fetch successful Dec 13 01:16:18.641223 coreos-metadata[1420]: Dec 13 01:16:18.641 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Dec 13 01:16:18.651084 coreos-metadata[1420]: Dec 13 01:16:18.646 INFO Fetch successful Dec 13 01:16:18.651084 coreos-metadata[1420]: Dec 13 01:16:18.647 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Dec 13 01:16:18.651084 coreos-metadata[1420]: Dec 13 01:16:18.648 INFO Fetch successful Dec 13 01:16:18.652097 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:16:18.654740 extend-filesystems[1423]: Found loop4 Dec 13 01:16:18.654740 extend-filesystems[1423]: Found loop5 Dec 13 01:16:18.654740 extend-filesystems[1423]: Found loop6 Dec 13 01:16:18.654740 extend-filesystems[1423]: Found loop7 Dec 13 01:16:18.654740 extend-filesystems[1423]: Found sda Dec 13 01:16:18.654740 extend-filesystems[1423]: Found sda1 Dec 13 01:16:18.654740 extend-filesystems[1423]: Found sda2 Dec 13 01:16:18.716337 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 01:16:18.670467 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 13 01:16:18.695044 dbus-daemon[1421]: [system] SELinux support is enabled Dec 13 01:16:18.716999 extend-filesystems[1423]: Found sda3 Dec 13 01:16:18.716999 extend-filesystems[1423]: Found usr Dec 13 01:16:18.716999 extend-filesystems[1423]: Found sda4 Dec 13 01:16:18.716999 extend-filesystems[1423]: Found sda6 Dec 13 01:16:18.716999 extend-filesystems[1423]: Found sda7 Dec 13 01:16:18.716999 extend-filesystems[1423]: Found sda9 Dec 13 01:16:18.716999 extend-filesystems[1423]: Checking size of /dev/sda9 Dec 13 01:16:18.716999 extend-filesystems[1423]: Resized partition /dev/sda9 Dec 13 01:16:18.816227 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 01:16:18.816272 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1294) Dec 13 01:16:18.711737 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:16:18.702204 dbus-daemon[1421]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1371 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:16:18.816641 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:16:18.816641 extend-filesystems[1444]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 01:16:18.816641 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 01:16:18.816641 extend-filesystems[1444]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. 
Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: ---------------------------------------------------- Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: corporation. Support and training for ntp-4 are Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: available at https://www.nwtime.org/support Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: ---------------------------------------------------- Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: proto: precision = 0.092 usec (-23) Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: basedate set to 2024-11-30 Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: gps base set to 2024-12-01 (week 2343) Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: Listen normally on 3 eth0 10.128.0.51:123 Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: Listen normally on 4 lo [::1]:123 Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: bind(21) AF_INET6 fe80::4001:aff:fe80:33%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: unable to create socket on eth0 (5) for 
fe80::4001:aff:fe80:33%2#123 Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:33%2 Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: Listening on routing socket on fd #21 for interface updates Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:16:18.817077 ntpd[1428]: 13 Dec 01:16:18 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:16:18.749015 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 01:16:18.722399 ntpd[1428]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:16:18.826246 extend-filesystems[1423]: Resized filesystem in /dev/sda9 Dec 13 01:16:18.750471 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:16:18.722432 ntpd[1428]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:16:18.757100 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:16:18.722447 ntpd[1428]: ---------------------------------------------------- Dec 13 01:16:18.792033 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:16:18.722462 ntpd[1428]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:16:18.897498 update_engine[1449]: I20241213 01:16:18.847545 1449 main.cc:92] Flatcar Update Engine starting Dec 13 01:16:18.897498 update_engine[1449]: I20241213 01:16:18.851548 1449 update_check_scheduler.cc:74] Next update check in 11m18s Dec 13 01:16:18.815837 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:16:18.722477 ntpd[1428]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:16:18.890595 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:16:18.722491 ntpd[1428]: corporation. Support and training for ntp-4 are Dec 13 01:16:18.890624 systemd-logind[1445]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 01:16:18.722507 ntpd[1428]: available at https://www.nwtime.org/support Dec 13 01:16:18.890655 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:16:18.722535 ntpd[1428]: ---------------------------------------------------- Dec 13 01:16:18.893576 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:16:18.725840 ntpd[1428]: proto: precision = 0.092 usec (-23) Dec 13 01:16:18.893832 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:16:18.734525 ntpd[1428]: basedate set to 2024-11-30 Dec 13 01:16:18.894291 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:16:18.734560 ntpd[1428]: gps base set to 2024-12-01 (week 2343) Dec 13 01:16:18.894504 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:16:18.738808 ntpd[1428]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:16:18.896358 systemd-logind[1445]: New seat seat0. Dec 13 01:16:18.738895 ntpd[1428]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:16:18.912654 systemd[1]: Started systemd-logind.service - User Login Management. 
Dec 13 01:16:18.741926 ntpd[1428]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:16:18.742012 ntpd[1428]: Listen normally on 3 eth0 10.128.0.51:123 Dec 13 01:16:18.742208 ntpd[1428]: Listen normally on 4 lo [::1]:123 Dec 13 01:16:18.742286 ntpd[1428]: bind(21) AF_INET6 fe80::4001:aff:fe80:33%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:16:18.742319 ntpd[1428]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:33%2#123 Dec 13 01:16:18.742343 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:33%2 Dec 13 01:16:18.742391 ntpd[1428]: Listening on routing socket on fd #21 for interface updates Dec 13 01:16:18.744236 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:16:18.744276 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:16:18.917887 jq[1453]: true Dec 13 01:16:18.922515 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:16:18.922786 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:16:18.938546 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:16:18.941107 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:16:18.975914 jq[1458]: true Dec 13 01:16:18.996613 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:16:19.024794 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:16:19.048767 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:16:19.089043 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:16:19.109562 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Dec 13 01:16:19.114782 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:16:19.115335 tar[1457]: linux-amd64/helm Dec 13 01:16:19.123661 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:16:19.124029 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:16:19.124306 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:16:19.152981 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:16:19.163193 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:16:19.163550 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:16:19.184222 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:16:19.203713 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:16:19.228289 systemd[1]: Starting sshkeys.service... Dec 13 01:16:19.294707 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:16:19.321824 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:16:19.395208 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:16:19.397026 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Dec 13 01:16:19.403391 dbus-daemon[1421]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1490 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:16:19.425972 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:16:19.509161 coreos-metadata[1494]: Dec 13 01:16:19.508 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 01:16:19.515219 coreos-metadata[1494]: Dec 13 01:16:19.514 INFO Fetch failed with 404: resource not found Dec 13 01:16:19.515219 coreos-metadata[1494]: Dec 13 01:16:19.515 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 01:16:19.517724 coreos-metadata[1494]: Dec 13 01:16:19.517 INFO Fetch successful Dec 13 01:16:19.517724 coreos-metadata[1494]: Dec 13 01:16:19.517 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 01:16:19.520208 coreos-metadata[1494]: Dec 13 01:16:19.518 INFO Fetch failed with 404: resource not found Dec 13 01:16:19.520208 coreos-metadata[1494]: Dec 13 01:16:19.519 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 01:16:19.521641 coreos-metadata[1494]: Dec 13 01:16:19.521 INFO Fetch failed with 404: resource not found Dec 13 01:16:19.521641 coreos-metadata[1494]: Dec 13 01:16:19.521 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 01:16:19.522651 coreos-metadata[1494]: Dec 13 01:16:19.522 INFO Fetch successful Dec 13 01:16:19.530453 unknown[1494]: wrote ssh authorized keys file for user: core Dec 13 01:16:19.589027 polkitd[1497]: Started polkitd version 121 Dec 13 01:16:19.607817 polkitd[1497]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:16:19.616328 polkitd[1497]: Loading rules from directory 
/usr/share/polkit-1/rules.d Dec 13 01:16:19.622878 polkitd[1497]: Finished loading, compiling and executing 2 rules Dec 13 01:16:19.625168 update-ssh-keys[1506]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:16:19.626040 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:16:19.627683 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:16:19.628569 polkitd[1497]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:16:19.637835 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:16:19.648842 systemd[1]: Finished sshkeys.service. Dec 13 01:16:19.697636 systemd-hostnamed[1490]: Hostname set to (transient) Dec 13 01:16:19.699066 systemd-resolved[1372]: System hostname changed to 'ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal'. Dec 13 01:16:19.723083 ntpd[1428]: bind(24) AF_INET6 fe80::4001:aff:fe80:33%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:16:19.725431 ntpd[1428]: 13 Dec 01:16:19 ntpd[1428]: bind(24) AF_INET6 fe80::4001:aff:fe80:33%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:16:19.725431 ntpd[1428]: 13 Dec 01:16:19 ntpd[1428]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:33%2#123 Dec 13 01:16:19.725431 ntpd[1428]: 13 Dec 01:16:19 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:33%2 Dec 13 01:16:19.723141 ntpd[1428]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:33%2#123 Dec 13 01:16:19.723164 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:33%2 Dec 13 01:16:19.747090 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:16:19.807912 systemd-networkd[1371]: eth0: Gained IPv6LL Dec 13 01:16:19.818010 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Dec 13 01:16:19.829840 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:16:19.851188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:16:19.870283 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:16:19.884962 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Dec 13 01:16:19.923970 containerd[1459]: time="2024-12-13T01:16:19.918827058Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:16:19.944390 init.sh[1523]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 01:16:19.946676 init.sh[1523]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 01:16:19.946676 init.sh[1523]: + /usr/bin/google_instance_setup Dec 13 01:16:19.990687 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:16:20.082260 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:16:20.104433 containerd[1459]: time="2024-12-13T01:16:20.102481726Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:20.109625 containerd[1459]: time="2024-12-13T01:16:20.109568295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:20.109793 containerd[1459]: time="2024-12-13T01:16:20.109770620Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:16:20.109911 containerd[1459]: time="2024-12-13T01:16:20.109890913Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Dec 13 01:16:20.110822 containerd[1459]: time="2024-12-13T01:16:20.110785377Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:16:20.111030 containerd[1459]: time="2024-12-13T01:16:20.111003995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:20.111300 containerd[1459]: time="2024-12-13T01:16:20.111268550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:20.111901 containerd[1459]: time="2024-12-13T01:16:20.111843603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:20.112391 containerd[1459]: time="2024-12-13T01:16:20.112355372Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:20.112943 containerd[1459]: time="2024-12-13T01:16:20.112911451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:20.113074 containerd[1459]: time="2024-12-13T01:16:20.113047661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:20.113281 containerd[1459]: time="2024-12-13T01:16:20.113254834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:20.113524 containerd[1459]: time="2024-12-13T01:16:20.113493343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:16:20.114469 containerd[1459]: time="2024-12-13T01:16:20.114440131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:16:20.115388 containerd[1459]: time="2024-12-13T01:16:20.115350970Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:16:20.116052 containerd[1459]: time="2024-12-13T01:16:20.115982403Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:16:20.116461 containerd[1459]: time="2024-12-13T01:16:20.116272468Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:16:20.118385 containerd[1459]: time="2024-12-13T01:16:20.116934934Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:16:20.128685 containerd[1459]: time="2024-12-13T01:16:20.128616304Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:16:20.129480 containerd[1459]: time="2024-12-13T01:16:20.128906115Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:16:20.129480 containerd[1459]: time="2024-12-13T01:16:20.129007508Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:16:20.129480 containerd[1459]: time="2024-12-13T01:16:20.129037807Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:16:20.129480 containerd[1459]: time="2024-12-13T01:16:20.129084582Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Dec 13 01:16:20.129480 containerd[1459]: time="2024-12-13T01:16:20.129353782Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:16:20.131117 containerd[1459]: time="2024-12-13T01:16:20.131086047Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:16:20.131472 containerd[1459]: time="2024-12-13T01:16:20.131443909Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:16:20.132687 containerd[1459]: time="2024-12-13T01:16:20.132656604Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:16:20.132808 containerd[1459]: time="2024-12-13T01:16:20.132788185Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:16:20.132919 containerd[1459]: time="2024-12-13T01:16:20.132900117Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:16:20.133005 containerd[1459]: time="2024-12-13T01:16:20.132989826Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.133907092Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.133946868Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.134013441Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.134036061Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.134087621Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.134115195Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.134154554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.134179218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.134201775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.134226505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.134248889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.134272350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.134291478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.134593 containerd[1459]: time="2024-12-13T01:16:20.134313920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Dec 13 01:16:20.135227 containerd[1459]: time="2024-12-13T01:16:20.134333748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.135227 containerd[1459]: time="2024-12-13T01:16:20.134357476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.135227 containerd[1459]: time="2024-12-13T01:16:20.134379240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.135227 containerd[1459]: time="2024-12-13T01:16:20.134400646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.135227 containerd[1459]: time="2024-12-13T01:16:20.134421373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.135227 containerd[1459]: time="2024-12-13T01:16:20.134444999Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:16:20.135227 containerd[1459]: time="2024-12-13T01:16:20.134480151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.135227 containerd[1459]: time="2024-12-13T01:16:20.134502372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.135227 containerd[1459]: time="2024-12-13T01:16:20.134519981Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:16:20.135227 containerd[1459]: time="2024-12-13T01:16:20.135344881Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:16:20.135227 containerd[1459]: time="2024-12-13T01:16:20.135594246Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:16:20.135227 containerd[1459]: time="2024-12-13T01:16:20.135623414Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:16:20.135227 containerd[1459]: time="2024-12-13T01:16:20.135658567Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:16:20.137729 containerd[1459]: time="2024-12-13T01:16:20.135677673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:16:20.137729 containerd[1459]: time="2024-12-13T01:16:20.135710317Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:16:20.137729 containerd[1459]: time="2024-12-13T01:16:20.135737077Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:16:20.137729 containerd[1459]: time="2024-12-13T01:16:20.135753544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:16:20.139470 containerd[1459]: time="2024-12-13T01:16:20.138317055Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:16:20.139470 containerd[1459]: time="2024-12-13T01:16:20.138413773Z" level=info msg="Connect containerd service" Dec 13 01:16:20.139470 containerd[1459]: time="2024-12-13T01:16:20.138474692Z" level=info msg="using legacy CRI server" Dec 13 01:16:20.139470 containerd[1459]: time="2024-12-13T01:16:20.138487787Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:16:20.139470 containerd[1459]: time="2024-12-13T01:16:20.138657189Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:16:20.145850 containerd[1459]: time="2024-12-13T01:16:20.143785605Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:16:20.145850 containerd[1459]: time="2024-12-13T01:16:20.144617554Z" level=info msg="Start subscribing containerd event" Dec 13 01:16:20.145850 containerd[1459]: time="2024-12-13T01:16:20.144704023Z" level=info msg="Start recovering state" Dec 13 01:16:20.145850 containerd[1459]: time="2024-12-13T01:16:20.144804318Z" level=info msg="Start event monitor" Dec 13 01:16:20.145850 containerd[1459]: time="2024-12-13T01:16:20.144829256Z" level=info msg="Start 
snapshots syncer" Dec 13 01:16:20.145850 containerd[1459]: time="2024-12-13T01:16:20.144844004Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:16:20.146702 containerd[1459]: time="2024-12-13T01:16:20.146671427Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:16:20.147472 containerd[1459]: time="2024-12-13T01:16:20.147234779Z" level=info msg="Start streaming server" Dec 13 01:16:20.148869 containerd[1459]: time="2024-12-13T01:16:20.148827168Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:16:20.151094 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:16:20.152027 containerd[1459]: time="2024-12-13T01:16:20.151248393Z" level=info msg="containerd successfully booted in 0.236628s" Dec 13 01:16:20.173159 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:16:20.191284 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:16:20.210051 systemd[1]: Started sshd@0-10.128.0.51:22-147.75.109.163:39852.service - OpenSSH per-connection server daemon (147.75.109.163:39852). Dec 13 01:16:20.228351 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:16:20.228704 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:16:20.252381 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:16:20.303501 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:16:20.327442 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:16:20.344339 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:16:20.354313 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 13 01:16:20.619715 sshd[1544]: Accepted publickey for core from 147.75.109.163 port 39852 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:20.625379 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:20.651251 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:16:20.670278 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:16:20.695111 systemd-logind[1445]: New session 1 of user core. Dec 13 01:16:20.715276 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:16:20.722485 tar[1457]: linux-amd64/LICENSE Dec 13 01:16:20.722485 tar[1457]: linux-amd64/README.md Dec 13 01:16:20.751252 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:16:20.760847 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:16:20.802489 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:16:20.949700 instance-setup[1528]: INFO Running google_set_multiqueue. Dec 13 01:16:20.986116 instance-setup[1528]: INFO Set channels for eth0 to 2. Dec 13 01:16:20.997477 instance-setup[1528]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 01:16:21.002123 instance-setup[1528]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 01:16:21.003930 instance-setup[1528]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 01:16:21.008063 instance-setup[1528]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 01:16:21.009152 instance-setup[1528]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 13 01:16:21.011544 instance-setup[1528]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 01:16:21.012189 instance-setup[1528]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Dec 13 01:16:21.015484 instance-setup[1528]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 01:16:21.031147 instance-setup[1528]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 13 01:16:21.035728 systemd[1559]: Queued start job for default target default.target. Dec 13 01:16:21.036831 instance-setup[1528]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 13 01:16:21.038646 instance-setup[1528]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 01:16:21.038702 instance-setup[1528]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 01:16:21.044092 systemd[1559]: Created slice app.slice - User Application Slice. Dec 13 01:16:21.044140 systemd[1559]: Reached target paths.target - Paths. Dec 13 01:16:21.044166 systemd[1559]: Reached target timers.target - Timers. Dec 13 01:16:21.047933 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:16:21.067713 init.sh[1523]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 01:16:21.080683 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:16:21.080923 systemd[1559]: Reached target sockets.target - Sockets. Dec 13 01:16:21.080954 systemd[1559]: Reached target basic.target - Basic System. Dec 13 01:16:21.081026 systemd[1559]: Reached target default.target - Main User Target. Dec 13 01:16:21.081082 systemd[1559]: Startup finished in 265ms. Dec 13 01:16:21.081278 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:16:21.101130 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:16:21.241816 startup-script[1595]: INFO Starting startup scripts. Dec 13 01:16:21.249310 startup-script[1595]: INFO No startup scripts found in metadata. Dec 13 01:16:21.249391 startup-script[1595]: INFO Finished running startup scripts. 
Dec 13 01:16:21.296291 init.sh[1523]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 01:16:21.296291 init.sh[1523]: + daemon_pids=() Dec 13 01:16:21.296291 init.sh[1523]: + for d in accounts clock_skew network Dec 13 01:16:21.296291 init.sh[1523]: + daemon_pids+=($!) Dec 13 01:16:21.296291 init.sh[1523]: + for d in accounts clock_skew network Dec 13 01:16:21.296291 init.sh[1523]: + daemon_pids+=($!) Dec 13 01:16:21.296291 init.sh[1523]: + for d in accounts clock_skew network Dec 13 01:16:21.296291 init.sh[1523]: + daemon_pids+=($!) Dec 13 01:16:21.296291 init.sh[1523]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 01:16:21.296291 init.sh[1523]: + /usr/bin/systemd-notify --ready Dec 13 01:16:21.297252 init.sh[1601]: + /usr/bin/google_accounts_daemon Dec 13 01:16:21.300155 init.sh[1602]: + /usr/bin/google_clock_skew_daemon Dec 13 01:16:21.303143 init.sh[1603]: + /usr/bin/google_network_daemon Dec 13 01:16:21.348427 systemd[1]: Started oem-gce.service - GCE Linux Agent. Dec 13 01:16:21.365772 init.sh[1523]: + wait -n 1601 1602 1603 Dec 13 01:16:21.371536 systemd[1]: Started sshd@1-10.128.0.51:22-147.75.109.163:39854.service - OpenSSH per-connection server daemon (147.75.109.163:39854). Dec 13 01:16:21.774155 sshd[1606]: Accepted publickey for core from 147.75.109.163 port 39854 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:21.773849 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:21.786803 systemd-logind[1445]: New session 2 of user core. Dec 13 01:16:21.789093 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:16:21.811333 google-clock-skew[1602]: INFO Starting Google Clock Skew daemon. Dec 13 01:16:21.820808 google-clock-skew[1602]: INFO Clock drift token has changed: 0. Dec 13 01:16:21.822477 google-networking[1603]: INFO Starting Google Networking daemon. 
Dec 13 01:16:21.870172 groupadd[1616]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 01:16:21.874849 groupadd[1616]: group added to /etc/gshadow: name=google-sudoers Dec 13 01:16:21.938318 groupadd[1616]: new group: name=google-sudoers, GID=1000 Dec 13 01:16:21.976732 google-accounts[1601]: INFO Starting Google Accounts daemon. Dec 13 01:16:21.995436 google-accounts[1601]: WARNING OS Login not installed. Dec 13 01:16:21.997358 sshd[1606]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:21.998756 google-accounts[1601]: INFO Creating a new user account for 0. Dec 13 01:16:22.006345 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:16:22.008079 init.sh[1629]: useradd: invalid user name '0': use --badname to ignore Dec 13 01:16:22.007543 systemd[1]: sshd@1-10.128.0.51:22-147.75.109.163:39854.service: Deactivated successfully. Dec 13 01:16:22.009144 google-accounts[1601]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 01:16:22.014078 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:16:22.022536 systemd-logind[1445]: Removed session 2. Dec 13 01:16:22.035133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:16:22.035554 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:16:22.059954 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:16:22.078701 systemd[1]: Started sshd@2-10.128.0.51:22-147.75.109.163:39856.service - OpenSSH per-connection server daemon (147.75.109.163:39856). Dec 13 01:16:22.079421 systemd[1]: Startup finished in 1.159s (kernel) + 9.852s (initrd) + 9.494s (userspace) = 20.506s. Dec 13 01:16:22.000472 google-clock-skew[1602]: INFO Synced system time with hardware clock. 
Dec 13 01:16:22.015514 systemd-journald[1114]: Time jumped backwards, rotating. Dec 13 01:16:22.002260 systemd-resolved[1372]: Clock change detected. Flushing caches. Dec 13 01:16:22.044404 sshd[1637]: Accepted publickey for core from 147.75.109.163 port 39856 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:22.046499 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:22.055194 systemd-logind[1445]: New session 3 of user core. Dec 13 01:16:22.057422 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:16:22.261189 sshd[1637]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:22.269380 systemd[1]: sshd@2-10.128.0.51:22-147.75.109.163:39856.service: Deactivated successfully. Dec 13 01:16:22.272092 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:16:22.273390 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:16:22.275520 systemd-logind[1445]: Removed session 3. Dec 13 01:16:22.375494 ntpd[1428]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:33%2]:123 Dec 13 01:16:22.376091 ntpd[1428]: 13 Dec 01:16:22 ntpd[1428]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:33%2]:123 Dec 13 01:16:22.730381 kubelet[1635]: E1213 01:16:22.730305 1635 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:16:22.733612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:16:22.733873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:16:22.734456 systemd[1]: kubelet.service: Consumed 1.243s CPU time. 
Dec 13 01:16:32.322711 systemd[1]: Started sshd@3-10.128.0.51:22-147.75.109.163:40370.service - OpenSSH per-connection server daemon (147.75.109.163:40370). Dec 13 01:16:32.618184 sshd[1656]: Accepted publickey for core from 147.75.109.163 port 40370 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:32.620388 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:32.627753 systemd-logind[1445]: New session 4 of user core. Dec 13 01:16:32.634493 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:16:32.791190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:16:32.800610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:16:32.833748 sshd[1656]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:32.840399 systemd[1]: sshd@3-10.128.0.51:22-147.75.109.163:40370.service: Deactivated successfully. Dec 13 01:16:32.846019 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:16:32.848127 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:16:32.849998 systemd-logind[1445]: Removed session 4. Dec 13 01:16:32.892769 systemd[1]: Started sshd@4-10.128.0.51:22-147.75.109.163:40378.service - OpenSSH per-connection server daemon (147.75.109.163:40378). Dec 13 01:16:33.143552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:16:33.146388 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:16:33.197941 sshd[1666]: Accepted publickey for core from 147.75.109.163 port 40378 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:33.200670 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:33.211297 systemd-logind[1445]: New session 5 of user core. 
Dec 13 01:16:33.217610 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:16:33.231644 kubelet[1673]: E1213 01:16:33.231494 1673 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:16:33.237256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:16:33.237529 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:16:33.409675 sshd[1666]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:33.415510 systemd[1]: sshd@4-10.128.0.51:22-147.75.109.163:40378.service: Deactivated successfully. Dec 13 01:16:33.418776 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:16:33.420971 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:16:33.422613 systemd-logind[1445]: Removed session 5. Dec 13 01:16:33.471877 systemd[1]: Started sshd@5-10.128.0.51:22-147.75.109.163:40394.service - OpenSSH per-connection server daemon (147.75.109.163:40394). Dec 13 01:16:33.771464 sshd[1686]: Accepted publickey for core from 147.75.109.163 port 40394 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:33.773678 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:33.781222 systemd-logind[1445]: New session 6 of user core. Dec 13 01:16:33.787544 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:16:33.987630 sshd[1686]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:33.991946 systemd[1]: sshd@5-10.128.0.51:22-147.75.109.163:40394.service: Deactivated successfully. Dec 13 01:16:33.994424 systemd[1]: session-6.scope: Deactivated successfully. 
Dec 13 01:16:33.996154 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:16:33.997592 systemd-logind[1445]: Removed session 6. Dec 13 01:16:34.043595 systemd[1]: Started sshd@6-10.128.0.51:22-147.75.109.163:40408.service - OpenSSH per-connection server daemon (147.75.109.163:40408). Dec 13 01:16:34.337365 sshd[1693]: Accepted publickey for core from 147.75.109.163 port 40408 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:34.339231 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:34.345744 systemd-logind[1445]: New session 7 of user core. Dec 13 01:16:34.355498 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:16:34.534815 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:16:34.535353 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:34.552042 sudo[1696]: pam_unix(sudo:session): session closed for user root Dec 13 01:16:34.595462 sshd[1693]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:34.600404 systemd[1]: sshd@6-10.128.0.51:22-147.75.109.163:40408.service: Deactivated successfully. Dec 13 01:16:34.602921 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:16:34.604700 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:16:34.606320 systemd-logind[1445]: Removed session 7. Dec 13 01:16:34.653591 systemd[1]: Started sshd@7-10.128.0.51:22-147.75.109.163:40420.service - OpenSSH per-connection server daemon (147.75.109.163:40420). Dec 13 01:16:34.945417 sshd[1701]: Accepted publickey for core from 147.75.109.163 port 40420 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:34.947388 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:34.953664 systemd-logind[1445]: New session 8 of user core. 
Dec 13 01:16:34.960472 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:16:35.123899 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:16:35.124473 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:35.129391 sudo[1705]: pam_unix(sudo:session): session closed for user root Dec 13 01:16:35.142707 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:16:35.143191 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:35.165818 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:16:35.168152 auditctl[1708]: No rules Dec 13 01:16:35.168661 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:16:35.168928 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:16:35.177196 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:16:35.209314 augenrules[1726]: No rules Dec 13 01:16:35.210244 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:16:35.212553 sudo[1704]: pam_unix(sudo:session): session closed for user root Dec 13 01:16:35.255408 sshd[1701]: pam_unix(sshd:session): session closed for user core Dec 13 01:16:35.259833 systemd[1]: sshd@7-10.128.0.51:22-147.75.109.163:40420.service: Deactivated successfully. Dec 13 01:16:35.262174 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:16:35.264028 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:16:35.265610 systemd-logind[1445]: Removed session 8. Dec 13 01:16:35.314921 systemd[1]: Started sshd@8-10.128.0.51:22-147.75.109.163:40434.service - OpenSSH per-connection server daemon (147.75.109.163:40434). 
Dec 13 01:16:35.599531 sshd[1734]: Accepted publickey for core from 147.75.109.163 port 40434 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:16:35.601351 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:16:35.607656 systemd-logind[1445]: New session 9 of user core. Dec 13 01:16:35.616465 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:16:35.778892 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:16:35.779531 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:16:36.228595 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:16:36.240935 (dockerd)[1753]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:16:36.701464 dockerd[1753]: time="2024-12-13T01:16:36.701381534Z" level=info msg="Starting up" Dec 13 01:16:36.858764 dockerd[1753]: time="2024-12-13T01:16:36.858660340Z" level=info msg="Loading containers: start." Dec 13 01:16:37.012361 kernel: Initializing XFRM netlink socket Dec 13 01:16:37.121565 systemd-networkd[1371]: docker0: Link UP Dec 13 01:16:37.142402 dockerd[1753]: time="2024-12-13T01:16:37.142352504Z" level=info msg="Loading containers: done." 
Dec 13 01:16:37.165445 dockerd[1753]: time="2024-12-13T01:16:37.165381269Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:16:37.165618 dockerd[1753]: time="2024-12-13T01:16:37.165511615Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:16:37.165697 dockerd[1753]: time="2024-12-13T01:16:37.165657925Z" level=info msg="Daemon has completed initialization" Dec 13 01:16:37.207646 dockerd[1753]: time="2024-12-13T01:16:37.207513139Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:16:37.208103 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:16:38.279185 containerd[1459]: time="2024-12-13T01:16:38.279114660Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:16:38.739109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount168126893.mount: Deactivated successfully. 
Dec 13 01:16:40.467397 containerd[1459]: time="2024-12-13T01:16:40.467310460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:40.469187 containerd[1459]: time="2024-12-13T01:16:40.469139688Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32682270" Dec 13 01:16:40.470737 containerd[1459]: time="2024-12-13T01:16:40.470667647Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:40.480257 containerd[1459]: time="2024-12-13T01:16:40.479352075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:40.481746 containerd[1459]: time="2024-12-13T01:16:40.481690331Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.202522784s" Dec 13 01:16:40.481877 containerd[1459]: time="2024-12-13T01:16:40.481753916Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 01:16:40.512098 containerd[1459]: time="2024-12-13T01:16:40.512047299Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:16:42.136479 containerd[1459]: time="2024-12-13T01:16:42.136388051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:42.138292 containerd[1459]: time="2024-12-13T01:16:42.138228722Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29608343" Dec 13 01:16:42.139633 containerd[1459]: time="2024-12-13T01:16:42.139548059Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:42.145007 containerd[1459]: time="2024-12-13T01:16:42.144631728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:42.146593 containerd[1459]: time="2024-12-13T01:16:42.146383972Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.634278238s" Dec 13 01:16:42.146593 containerd[1459]: time="2024-12-13T01:16:42.146434932Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 01:16:42.178811 containerd[1459]: time="2024-12-13T01:16:42.178761236Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:16:43.301955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:16:43.309653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 01:16:43.334848 containerd[1459]: time="2024-12-13T01:16:43.334798948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:43.337911 containerd[1459]: time="2024-12-13T01:16:43.337854995Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17784951" Dec 13 01:16:43.339415 containerd[1459]: time="2024-12-13T01:16:43.339380121Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:43.425334 containerd[1459]: time="2024-12-13T01:16:43.425267180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:43.430198 containerd[1459]: time="2024-12-13T01:16:43.430140292Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.251323016s" Dec 13 01:16:43.430411 containerd[1459]: time="2024-12-13T01:16:43.430386946Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 01:16:43.475976 containerd[1459]: time="2024-12-13T01:16:43.475829082Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:16:43.621512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:16:43.621824 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:16:43.685528 kubelet[1978]: E1213 01:16:43.685461 1978 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:16:43.689175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:16:43.689437 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:16:44.769737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468643835.mount: Deactivated successfully. Dec 13 01:16:45.344141 containerd[1459]: time="2024-12-13T01:16:45.344060407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:45.345577 containerd[1459]: time="2024-12-13T01:16:45.345501129Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29059365" Dec 13 01:16:45.347261 containerd[1459]: time="2024-12-13T01:16:45.347213400Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:45.350973 containerd[1459]: time="2024-12-13T01:16:45.350858360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:45.352501 containerd[1459]: time="2024-12-13T01:16:45.351983701Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id 
\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.875861699s" Dec 13 01:16:45.352501 containerd[1459]: time="2024-12-13T01:16:45.352032537Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:16:45.383231 containerd[1459]: time="2024-12-13T01:16:45.383169248Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:16:45.795552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount405746863.mount: Deactivated successfully. Dec 13 01:16:46.858923 containerd[1459]: time="2024-12-13T01:16:46.858843755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:46.860576 containerd[1459]: time="2024-12-13T01:16:46.860516572Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Dec 13 01:16:46.862110 containerd[1459]: time="2024-12-13T01:16:46.862024206Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:46.866922 containerd[1459]: time="2024-12-13T01:16:46.866858781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:46.870165 containerd[1459]: time="2024-12-13T01:16:46.869005480Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.485760667s" Dec 13 01:16:46.870165 containerd[1459]: time="2024-12-13T01:16:46.869056327Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:16:46.900948 containerd[1459]: time="2024-12-13T01:16:46.900901371Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:16:47.275123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652343973.mount: Deactivated successfully. Dec 13 01:16:47.285116 containerd[1459]: time="2024-12-13T01:16:47.285037398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:47.286644 containerd[1459]: time="2024-12-13T01:16:47.286578712Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Dec 13 01:16:47.288182 containerd[1459]: time="2024-12-13T01:16:47.288100388Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:47.293454 containerd[1459]: time="2024-12-13T01:16:47.293371121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:47.295275 containerd[1459]: time="2024-12-13T01:16:47.294453849Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 393.499112ms" Dec 13 01:16:47.295275 containerd[1459]: time="2024-12-13T01:16:47.294499795Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:16:47.327102 containerd[1459]: time="2024-12-13T01:16:47.326771495Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:16:47.752856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633728661.mount: Deactivated successfully. Dec 13 01:16:49.384482 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 01:16:50.424076 containerd[1459]: time="2024-12-13T01:16:50.423991275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:50.425820 containerd[1459]: time="2024-12-13T01:16:50.425747208Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" Dec 13 01:16:50.427257 containerd[1459]: time="2024-12-13T01:16:50.427172021Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:50.431409 containerd[1459]: time="2024-12-13T01:16:50.431326363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:16:50.433674 containerd[1459]: time="2024-12-13T01:16:50.433592906Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest 
\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.106767924s" Dec 13 01:16:50.435618 containerd[1459]: time="2024-12-13T01:16:50.433806603Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 01:16:53.350394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:16:53.357608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:16:53.401218 systemd[1]: Reloading requested from client PID 2168 ('systemctl') (unit session-9.scope)... Dec 13 01:16:53.401248 systemd[1]: Reloading... Dec 13 01:16:53.570697 zram_generator::config[2214]: No configuration found. Dec 13 01:16:53.712917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:16:53.813655 systemd[1]: Reloading finished in 411 ms. Dec 13 01:16:53.880532 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:16:53.880660 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:16:53.881044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:16:53.887719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:16:54.098083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:16:54.111829 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:16:54.171500 kubelet[2260]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:16:54.171500 kubelet[2260]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:16:54.171500 kubelet[2260]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:16:54.172056 kubelet[2260]: I1213 01:16:54.171583 2260 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:16:54.551495 kubelet[2260]: I1213 01:16:54.551457 2260 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:16:54.551495 kubelet[2260]: I1213 01:16:54.551498 2260 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:16:54.551924 kubelet[2260]: I1213 01:16:54.551898 2260 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:16:54.579225 kubelet[2260]: I1213 01:16:54.578287 2260 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:16:54.579556 kubelet[2260]: E1213 01:16:54.579533 2260 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:54.605815 kubelet[2260]: I1213 01:16:54.605780 2260 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:16:54.606309 kubelet[2260]: I1213 01:16:54.606246 2260 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:16:54.606565 kubelet[2260]: I1213 01:16:54.606302 2260 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:16:54.606769 kubelet[2260]: I1213 01:16:54.606578 2260 topology_manager.go:138] "Creating 
topology manager with none policy" Dec 13 01:16:54.606769 kubelet[2260]: I1213 01:16:54.606598 2260 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:16:54.606877 kubelet[2260]: I1213 01:16:54.606770 2260 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:16:54.608754 kubelet[2260]: I1213 01:16:54.608383 2260 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:16:54.608754 kubelet[2260]: I1213 01:16:54.608421 2260 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:16:54.608754 kubelet[2260]: I1213 01:16:54.608487 2260 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:16:54.608754 kubelet[2260]: I1213 01:16:54.608511 2260 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:16:54.615668 kubelet[2260]: W1213 01:16:54.615595 2260 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:54.616113 kubelet[2260]: E1213 01:16:54.615832 2260 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:54.616113 kubelet[2260]: W1213 01:16:54.615951 2260 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:54.616113 kubelet[2260]: E1213 01:16:54.616004 2260 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get "https://10.128.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:54.616769 kubelet[2260]: I1213 01:16:54.616740 2260 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:16:54.618829 kubelet[2260]: I1213 01:16:54.618799 2260 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:16:54.620231 kubelet[2260]: W1213 01:16:54.619023 2260 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:16:54.620377 kubelet[2260]: I1213 01:16:54.620360 2260 server.go:1264] "Started kubelet" Dec 13 01:16:54.622469 kubelet[2260]: I1213 01:16:54.622425 2260 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:16:54.623914 kubelet[2260]: I1213 01:16:54.623763 2260 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:16:54.628030 kubelet[2260]: I1213 01:16:54.627664 2260 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:16:54.630293 kubelet[2260]: I1213 01:16:54.630230 2260 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:16:54.630663 kubelet[2260]: I1213 01:16:54.630642 2260 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:16:54.633239 kubelet[2260]: E1213 01:16:54.632987 2260 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal.181097aa64482330 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal,UID:ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 01:16:54.620324656 +0000 UTC m=+0.503165257,LastTimestamp:2024-12-13 01:16:54.620324656 +0000 UTC m=+0.503165257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal,}" Dec 13 01:16:54.637633 kubelet[2260]: I1213 01:16:54.637593 2260 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:16:54.637785 kubelet[2260]: I1213 01:16:54.637763 2260 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:16:54.637879 kubelet[2260]: I1213 01:16:54.637861 2260 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:16:54.638462 kubelet[2260]: W1213 01:16:54.638398 2260 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:54.638564 kubelet[2260]: E1213 01:16:54.638477 2260 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:54.638967 kubelet[2260]: E1213 01:16:54.638906 2260 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.51:6443: connect: connection refused" interval="200ms" Dec 13 01:16:54.641689 kubelet[2260]: I1213 01:16:54.641664 2260 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:16:54.642129 kubelet[2260]: I1213 01:16:54.641924 2260 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:16:54.643873 kubelet[2260]: I1213 01:16:54.643850 2260 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:16:54.658258 kubelet[2260]: E1213 01:16:54.657111 2260 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:16:54.659875 kubelet[2260]: I1213 01:16:54.659829 2260 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:16:54.661862 kubelet[2260]: I1213 01:16:54.661825 2260 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:16:54.661967 kubelet[2260]: I1213 01:16:54.661872 2260 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:16:54.661967 kubelet[2260]: I1213 01:16:54.661900 2260 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:16:54.662061 kubelet[2260]: E1213 01:16:54.661960 2260 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:16:54.671533 kubelet[2260]: W1213 01:16:54.671446 2260 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:54.672125 kubelet[2260]: E1213 01:16:54.672080 2260 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:54.691054 kubelet[2260]: I1213 01:16:54.691014 2260 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:16:54.691054 kubelet[2260]: I1213 01:16:54.691039 2260 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:16:54.691291 kubelet[2260]: I1213 01:16:54.691082 2260 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:16:54.694600 kubelet[2260]: I1213 01:16:54.694567 2260 policy_none.go:49] "None policy: Start" Dec 13 01:16:54.696120 kubelet[2260]: I1213 01:16:54.696062 2260 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:16:54.696120 kubelet[2260]: I1213 01:16:54.696128 2260 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:16:54.709723 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 01:16:54.721320 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:16:54.728515 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:16:54.736872 kubelet[2260]: I1213 01:16:54.736431 2260 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:16:54.737191 kubelet[2260]: I1213 01:16:54.737079 2260 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:16:54.737325 kubelet[2260]: I1213 01:16:54.737302 2260 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:16:54.740901 kubelet[2260]: E1213 01:16:54.740779 2260 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" not found" Dec 13 01:16:54.744357 kubelet[2260]: I1213 01:16:54.744305 2260 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.744839 kubelet[2260]: E1213 01:16:54.744804 2260 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.51:6443/api/v1/nodes\": dial tcp 10.128.0.51:6443: connect: connection refused" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.763327 kubelet[2260]: I1213 01:16:54.763187 2260 topology_manager.go:215] "Topology Admit Handler" podUID="0d735a90ad5c0ecd99a146e85c8e4f82" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.770708 kubelet[2260]: I1213 01:16:54.770531 2260 topology_manager.go:215] "Topology Admit Handler" podUID="aa62abe2f4abddb46b422ad671321a0a" podNamespace="kube-system" 
podName="kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.785994 kubelet[2260]: I1213 01:16:54.785638 2260 topology_manager.go:215] "Topology Admit Handler" podUID="aaa6ac15d36c13ceb6816ed3f59189ab" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.795832 systemd[1]: Created slice kubepods-burstable-pod0d735a90ad5c0ecd99a146e85c8e4f82.slice - libcontainer container kubepods-burstable-pod0d735a90ad5c0ecd99a146e85c8e4f82.slice. Dec 13 01:16:54.816596 systemd[1]: Created slice kubepods-burstable-podaa62abe2f4abddb46b422ad671321a0a.slice - libcontainer container kubepods-burstable-podaa62abe2f4abddb46b422ad671321a0a.slice. Dec 13 01:16:54.825151 systemd[1]: Created slice kubepods-burstable-podaaa6ac15d36c13ceb6816ed3f59189ab.slice - libcontainer container kubepods-burstable-podaaa6ac15d36c13ceb6816ed3f59189ab.slice. Dec 13 01:16:54.839540 kubelet[2260]: E1213 01:16:54.839484 2260 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.51:6443: connect: connection refused" interval="400ms" Dec 13 01:16:54.938937 kubelet[2260]: I1213 01:16:54.938878 2260 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa62abe2f4abddb46b422ad671321a0a-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aa62abe2f4abddb46b422ad671321a0a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.939348 kubelet[2260]: I1213 01:16:54.938947 2260 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/aa62abe2f4abddb46b422ad671321a0a-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aa62abe2f4abddb46b422ad671321a0a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.939348 kubelet[2260]: I1213 01:16:54.938984 2260 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa62abe2f4abddb46b422ad671321a0a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aa62abe2f4abddb46b422ad671321a0a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.939348 kubelet[2260]: I1213 01:16:54.939024 2260 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aaa6ac15d36c13ceb6816ed3f59189ab-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aaa6ac15d36c13ceb6816ed3f59189ab\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.939348 kubelet[2260]: I1213 01:16:54.939053 2260 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aaa6ac15d36c13ceb6816ed3f59189ab-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aaa6ac15d36c13ceb6816ed3f59189ab\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.939501 kubelet[2260]: I1213 01:16:54.939082 2260 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0d735a90ad5c0ecd99a146e85c8e4f82-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"0d735a90ad5c0ecd99a146e85c8e4f82\") " pod="kube-system/kube-scheduler-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.939501 kubelet[2260]: I1213 01:16:54.939122 2260 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aaa6ac15d36c13ceb6816ed3f59189ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aaa6ac15d36c13ceb6816ed3f59189ab\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.939501 kubelet[2260]: I1213 01:16:54.939148 2260 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aaa6ac15d36c13ceb6816ed3f59189ab-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aaa6ac15d36c13ceb6816ed3f59189ab\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.939501 kubelet[2260]: I1213 01:16:54.939190 2260 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaa6ac15d36c13ceb6816ed3f59189ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aaa6ac15d36c13ceb6816ed3f59189ab\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.951466 kubelet[2260]: I1213 01:16:54.951419 2260 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:54.951890 kubelet[2260]: E1213 01:16:54.951828 2260 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.51:6443/api/v1/nodes\": dial tcp 10.128.0.51:6443: connect: connection refused" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:55.113472 containerd[1459]: time="2024-12-13T01:16:55.113025236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal,Uid:0d735a90ad5c0ecd99a146e85c8e4f82,Namespace:kube-system,Attempt:0,}" Dec 13 01:16:55.126593 containerd[1459]: time="2024-12-13T01:16:55.126492094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal,Uid:aa62abe2f4abddb46b422ad671321a0a,Namespace:kube-system,Attempt:0,}" Dec 13 01:16:55.129664 containerd[1459]: time="2024-12-13T01:16:55.129371233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal,Uid:aaa6ac15d36c13ceb6816ed3f59189ab,Namespace:kube-system,Attempt:0,}" Dec 13 01:16:55.240494 kubelet[2260]: E1213 01:16:55.240408 2260 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.51:6443: connect: connection refused" interval="800ms" Dec 13 01:16:55.358327 kubelet[2260]: I1213 01:16:55.358288 2260 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:55.358732 kubelet[2260]: E1213 01:16:55.358684 2260 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.51:6443/api/v1/nodes\": dial tcp 
10.128.0.51:6443: connect: connection refused" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:55.511258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3483037035.mount: Deactivated successfully. Dec 13 01:16:55.524573 containerd[1459]: time="2024-12-13T01:16:55.524487159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:16:55.526123 containerd[1459]: time="2024-12-13T01:16:55.526067638Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:16:55.527485 containerd[1459]: time="2024-12-13T01:16:55.527366166Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Dec 13 01:16:55.528885 containerd[1459]: time="2024-12-13T01:16:55.528821209Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:16:55.530381 containerd[1459]: time="2024-12-13T01:16:55.530287895Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:16:55.532252 containerd[1459]: time="2024-12-13T01:16:55.532181905Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:16:55.533384 containerd[1459]: time="2024-12-13T01:16:55.533297030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:16:55.537299 containerd[1459]: time="2024-12-13T01:16:55.537198448Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:16:55.539234 containerd[1459]: time="2024-12-13T01:16:55.538365073Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 411.763486ms" Dec 13 01:16:55.540968 containerd[1459]: time="2024-12-13T01:16:55.540706505Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 427.575486ms" Dec 13 01:16:55.555591 containerd[1459]: time="2024-12-13T01:16:55.555311086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 425.844089ms" Dec 13 01:16:55.679226 kubelet[2260]: W1213 01:16:55.679092 2260 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:55.679514 kubelet[2260]: E1213 01:16:55.679465 2260 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.128.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:55.698566 kubelet[2260]: W1213 01:16:55.698489 2260 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:55.698865 kubelet[2260]: E1213 01:16:55.698810 2260 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:55.761366 containerd[1459]: time="2024-12-13T01:16:55.760482352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:16:55.761538 containerd[1459]: time="2024-12-13T01:16:55.761043838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:16:55.761538 containerd[1459]: time="2024-12-13T01:16:55.761129890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:16:55.761538 containerd[1459]: time="2024-12-13T01:16:55.761157829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:16:55.761538 containerd[1459]: time="2024-12-13T01:16:55.761298348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:16:55.761796 containerd[1459]: time="2024-12-13T01:16:55.761536660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:16:55.761796 containerd[1459]: time="2024-12-13T01:16:55.761624475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:16:55.763646 containerd[1459]: time="2024-12-13T01:16:55.762139625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:16:55.763646 containerd[1459]: time="2024-12-13T01:16:55.762252904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:16:55.763646 containerd[1459]: time="2024-12-13T01:16:55.762281667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:16:55.763646 containerd[1459]: time="2024-12-13T01:16:55.762412943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:16:55.764156 containerd[1459]: time="2024-12-13T01:16:55.764014539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:16:55.808452 systemd[1]: Started cri-containerd-230bcc5173f359f4c8a5e41826d1959836edaaccba660ccb5832af37df02001c.scope - libcontainer container 230bcc5173f359f4c8a5e41826d1959836edaaccba660ccb5832af37df02001c. Dec 13 01:16:55.824457 systemd[1]: Started cri-containerd-67892230ce2d301ec8b7c14fe4ebc0dfb60c7bf982d6f13baebb0688e3a0e597.scope - libcontainer container 67892230ce2d301ec8b7c14fe4ebc0dfb60c7bf982d6f13baebb0688e3a0e597. Dec 13 01:16:55.827277 systemd[1]: Started cri-containerd-d9e6a9b7a09a609074f014f7e2de583705af5bb76ae53e07e38a0fa278fdff8e.scope - libcontainer container d9e6a9b7a09a609074f014f7e2de583705af5bb76ae53e07e38a0fa278fdff8e. 
Dec 13 01:16:55.915713 containerd[1459]: time="2024-12-13T01:16:55.915660185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal,Uid:0d735a90ad5c0ecd99a146e85c8e4f82,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9e6a9b7a09a609074f014f7e2de583705af5bb76ae53e07e38a0fa278fdff8e\"" Dec 13 01:16:55.919694 kubelet[2260]: E1213 01:16:55.919650 2260 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-2-1-4873a8059e1999158086.c.flatcar-21291" Dec 13 01:16:55.924363 containerd[1459]: time="2024-12-13T01:16:55.924315389Z" level=info msg="CreateContainer within sandbox \"d9e6a9b7a09a609074f014f7e2de583705af5bb76ae53e07e38a0fa278fdff8e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:16:55.932243 containerd[1459]: time="2024-12-13T01:16:55.932096804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal,Uid:aa62abe2f4abddb46b422ad671321a0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"230bcc5173f359f4c8a5e41826d1959836edaaccba660ccb5832af37df02001c\"" Dec 13 01:16:55.935641 kubelet[2260]: E1213 01:16:55.935596 2260 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-21291" Dec 13 01:16:55.943113 containerd[1459]: time="2024-12-13T01:16:55.942954879Z" level=info msg="CreateContainer within sandbox \"230bcc5173f359f4c8a5e41826d1959836edaaccba660ccb5832af37df02001c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:16:55.953567 kubelet[2260]: W1213 01:16:55.953425 2260 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:55.953567 kubelet[2260]: E1213 01:16:55.953531 2260 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:55.965019 containerd[1459]: time="2024-12-13T01:16:55.964550952Z" level=info msg="CreateContainer within sandbox \"d9e6a9b7a09a609074f014f7e2de583705af5bb76ae53e07e38a0fa278fdff8e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f969082724b6047d86792d339b282f73080867489989b9547cc729105018bd49\"" Dec 13 01:16:55.966664 containerd[1459]: time="2024-12-13T01:16:55.966627266Z" level=info msg="StartContainer for \"f969082724b6047d86792d339b282f73080867489989b9547cc729105018bd49\"" Dec 13 01:16:55.967392 containerd[1459]: time="2024-12-13T01:16:55.967356732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal,Uid:aaa6ac15d36c13ceb6816ed3f59189ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"67892230ce2d301ec8b7c14fe4ebc0dfb60c7bf982d6f13baebb0688e3a0e597\"" Dec 13 01:16:55.970231 kubelet[2260]: E1213 01:16:55.970172 2260 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flat" Dec 13 01:16:55.974585 containerd[1459]: time="2024-12-13T01:16:55.974546996Z" 
level=info msg="CreateContainer within sandbox \"67892230ce2d301ec8b7c14fe4ebc0dfb60c7bf982d6f13baebb0688e3a0e597\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:16:55.983149 containerd[1459]: time="2024-12-13T01:16:55.983092422Z" level=info msg="CreateContainer within sandbox \"230bcc5173f359f4c8a5e41826d1959836edaaccba660ccb5832af37df02001c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2e16d70049531143f532d37cb6ae53a70d4d86460d16ea891ba2a6ba509f3f89\"" Dec 13 01:16:55.984292 containerd[1459]: time="2024-12-13T01:16:55.983771941Z" level=info msg="StartContainer for \"2e16d70049531143f532d37cb6ae53a70d4d86460d16ea891ba2a6ba509f3f89\"" Dec 13 01:16:56.007408 containerd[1459]: time="2024-12-13T01:16:56.007354386Z" level=info msg="CreateContainer within sandbox \"67892230ce2d301ec8b7c14fe4ebc0dfb60c7bf982d6f13baebb0688e3a0e597\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7f9bdc46a4f1a9c2afdfcd9069d88967a1d5753bd184088df4502659c1f982c7\"" Dec 13 01:16:56.008931 containerd[1459]: time="2024-12-13T01:16:56.008899752Z" level=info msg="StartContainer for \"7f9bdc46a4f1a9c2afdfcd9069d88967a1d5753bd184088df4502659c1f982c7\"" Dec 13 01:16:56.020657 systemd[1]: Started cri-containerd-f969082724b6047d86792d339b282f73080867489989b9547cc729105018bd49.scope - libcontainer container f969082724b6047d86792d339b282f73080867489989b9547cc729105018bd49. 
Dec 13 01:16:56.041647 kubelet[2260]: E1213 01:16:56.041563 2260 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.51:6443: connect: connection refused" interval="1.6s" Dec 13 01:16:56.044461 systemd[1]: Started cri-containerd-2e16d70049531143f532d37cb6ae53a70d4d86460d16ea891ba2a6ba509f3f89.scope - libcontainer container 2e16d70049531143f532d37cb6ae53a70d4d86460d16ea891ba2a6ba509f3f89. Dec 13 01:16:56.080436 systemd[1]: Started cri-containerd-7f9bdc46a4f1a9c2afdfcd9069d88967a1d5753bd184088df4502659c1f982c7.scope - libcontainer container 7f9bdc46a4f1a9c2afdfcd9069d88967a1d5753bd184088df4502659c1f982c7. Dec 13 01:16:56.133165 kubelet[2260]: W1213 01:16:56.133071 2260 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:56.133165 kubelet[2260]: E1213 01:16:56.133169 2260 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Dec 13 01:16:56.143274 containerd[1459]: time="2024-12-13T01:16:56.141267750Z" level=info msg="StartContainer for \"f969082724b6047d86792d339b282f73080867489989b9547cc729105018bd49\" returns successfully" Dec 13 01:16:56.166281 kubelet[2260]: I1213 01:16:56.166171 2260 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:56.167030 kubelet[2260]: E1213 01:16:56.166991 2260 kubelet_node_status.go:96] "Unable to register node with API server" 
err="Post \"https://10.128.0.51:6443/api/v1/nodes\": dial tcp 10.128.0.51:6443: connect: connection refused" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:56.167693 containerd[1459]: time="2024-12-13T01:16:56.167652897Z" level=info msg="StartContainer for \"2e16d70049531143f532d37cb6ae53a70d4d86460d16ea891ba2a6ba509f3f89\" returns successfully" Dec 13 01:16:56.209236 containerd[1459]: time="2024-12-13T01:16:56.209128806Z" level=info msg="StartContainer for \"7f9bdc46a4f1a9c2afdfcd9069d88967a1d5753bd184088df4502659c1f982c7\" returns successfully" Dec 13 01:16:57.776171 kubelet[2260]: I1213 01:16:57.775105 2260 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:59.285683 kubelet[2260]: E1213 01:16:59.285621 2260 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" not found" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:59.385925 kubelet[2260]: I1213 01:16:59.385873 2260 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:16:59.611553 kubelet[2260]: I1213 01:16:59.611501 2260 apiserver.go:52] "Watching apiserver" Dec 13 01:16:59.638832 kubelet[2260]: I1213 01:16:59.638788 2260 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:17:01.571983 systemd[1]: Reloading requested from client PID 2539 ('systemctl') (unit session-9.scope)... Dec 13 01:17:01.572007 systemd[1]: Reloading... Dec 13 01:17:01.715319 zram_generator::config[2579]: No configuration found. Dec 13 01:17:01.875975 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 01:17:02.004344 systemd[1]: Reloading finished in 431 ms. Dec 13 01:17:02.060412 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:02.068286 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:17:02.068575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:02.068664 systemd[1]: kubelet.service: Consumed 1.025s CPU time, 115.5M memory peak, 0B memory swap peak. Dec 13 01:17:02.078777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:17:02.332316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:17:02.345793 (kubelet)[2627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:17:02.429373 kubelet[2627]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:17:02.429373 kubelet[2627]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:17:02.429373 kubelet[2627]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:17:02.429957 kubelet[2627]: I1213 01:17:02.429493 2627 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:17:02.437396 kubelet[2627]: I1213 01:17:02.437348 2627 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:17:02.437396 kubelet[2627]: I1213 01:17:02.437376 2627 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:17:02.437766 kubelet[2627]: I1213 01:17:02.437729 2627 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:17:02.440321 kubelet[2627]: I1213 01:17:02.440274 2627 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:17:02.443284 kubelet[2627]: I1213 01:17:02.442671 2627 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:17:02.458024 kubelet[2627]: I1213 01:17:02.457976 2627 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:17:02.458462 kubelet[2627]: I1213 01:17:02.458412 2627 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:17:02.458737 kubelet[2627]: I1213 01:17:02.458463 2627 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:17:02.458933 kubelet[2627]: I1213 01:17:02.458746 2627 topology_manager.go:138] "Creating 
topology manager with none policy" Dec 13 01:17:02.458933 kubelet[2627]: I1213 01:17:02.458768 2627 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:17:02.458933 kubelet[2627]: I1213 01:17:02.458841 2627 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:02.459087 kubelet[2627]: I1213 01:17:02.458991 2627 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:17:02.459087 kubelet[2627]: I1213 01:17:02.459012 2627 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:17:02.459188 kubelet[2627]: I1213 01:17:02.459098 2627 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:17:02.459188 kubelet[2627]: I1213 01:17:02.459129 2627 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:17:02.462644 kubelet[2627]: I1213 01:17:02.461728 2627 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:17:02.462644 kubelet[2627]: I1213 01:17:02.462003 2627 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:17:02.464191 kubelet[2627]: I1213 01:17:02.463756 2627 server.go:1264] "Started kubelet" Dec 13 01:17:02.469129 kubelet[2627]: I1213 01:17:02.468193 2627 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:17:02.469129 kubelet[2627]: I1213 01:17:02.468653 2627 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:17:02.469129 kubelet[2627]: I1213 01:17:02.468702 2627 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:17:02.471239 kubelet[2627]: I1213 01:17:02.470332 2627 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:17:02.475063 kubelet[2627]: I1213 01:17:02.473736 2627 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:17:02.492370 
kubelet[2627]: I1213 01:17:02.489470 2627 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:17:02.492370 kubelet[2627]: I1213 01:17:02.489974 2627 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:17:02.492370 kubelet[2627]: I1213 01:17:02.490182 2627 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:17:02.501079 kubelet[2627]: I1213 01:17:02.501019 2627 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:17:02.501901 kubelet[2627]: I1213 01:17:02.501868 2627 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:17:02.504613 kubelet[2627]: I1213 01:17:02.504589 2627 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:17:02.504849 kubelet[2627]: I1213 01:17:02.504833 2627 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:17:02.505007 kubelet[2627]: I1213 01:17:02.504969 2627 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:17:02.505296 kubelet[2627]: E1213 01:17:02.505269 2627 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:17:02.511435 kubelet[2627]: I1213 01:17:02.511402 2627 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:17:02.511435 kubelet[2627]: I1213 01:17:02.511431 2627 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:17:02.520049 kubelet[2627]: E1213 01:17:02.519996 2627 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:17:02.596914 sudo[2657]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:17:02.598256 sudo[2657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:17:02.605230 kubelet[2627]: I1213 01:17:02.602675 2627 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.609970 kubelet[2627]: E1213 01:17:02.605589 2627 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:17:02.618483 kubelet[2627]: I1213 01:17:02.617742 2627 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.619162 kubelet[2627]: I1213 01:17:02.619135 2627 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.646537 kubelet[2627]: I1213 01:17:02.646492 2627 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:17:02.646537 kubelet[2627]: I1213 01:17:02.646535 2627 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:17:02.646746 kubelet[2627]: I1213 01:17:02.646563 2627 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:17:02.646814 kubelet[2627]: I1213 01:17:02.646775 2627 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:17:02.646814 kubelet[2627]: I1213 01:17:02.646791 2627 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:17:02.646910 kubelet[2627]: I1213 01:17:02.646820 2627 policy_none.go:49] "None policy: Start" Dec 13 01:17:02.652815 kubelet[2627]: I1213 01:17:02.651497 2627 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:17:02.652815 kubelet[2627]: I1213 01:17:02.651580 2627 
state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:17:02.652815 kubelet[2627]: I1213 01:17:02.651930 2627 state_mem.go:75] "Updated machine memory state" Dec 13 01:17:02.659353 kubelet[2627]: I1213 01:17:02.659321 2627 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:17:02.660080 kubelet[2627]: I1213 01:17:02.659551 2627 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:17:02.660780 kubelet[2627]: I1213 01:17:02.660646 2627 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:17:02.806399 kubelet[2627]: I1213 01:17:02.806316 2627 topology_manager.go:215] "Topology Admit Handler" podUID="aaa6ac15d36c13ceb6816ed3f59189ab" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.806580 kubelet[2627]: I1213 01:17:02.806482 2627 topology_manager.go:215] "Topology Admit Handler" podUID="0d735a90ad5c0ecd99a146e85c8e4f82" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.806658 kubelet[2627]: I1213 01:17:02.806613 2627 topology_manager.go:215] "Topology Admit Handler" podUID="aa62abe2f4abddb46b422ad671321a0a" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.833981 kubelet[2627]: W1213 01:17:02.833499 2627 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:17:02.840367 kubelet[2627]: W1213 01:17:02.839805 2627 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 
01:17:02.842258 kubelet[2627]: W1213 01:17:02.841603 2627 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:17:02.893317 kubelet[2627]: I1213 01:17:02.891926 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aaa6ac15d36c13ceb6816ed3f59189ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aaa6ac15d36c13ceb6816ed3f59189ab\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.893317 kubelet[2627]: I1213 01:17:02.891986 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aaa6ac15d36c13ceb6816ed3f59189ab-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aaa6ac15d36c13ceb6816ed3f59189ab\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.893317 kubelet[2627]: I1213 01:17:02.892024 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d735a90ad5c0ecd99a146e85c8e4f82-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"0d735a90ad5c0ecd99a146e85c8e4f82\") " pod="kube-system/kube-scheduler-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.893317 kubelet[2627]: I1213 01:17:02.892053 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa62abe2f4abddb46b422ad671321a0a-ca-certs\") 
pod \"kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aa62abe2f4abddb46b422ad671321a0a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.893611 kubelet[2627]: I1213 01:17:02.892089 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa62abe2f4abddb46b422ad671321a0a-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aa62abe2f4abddb46b422ad671321a0a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.893611 kubelet[2627]: I1213 01:17:02.892119 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa62abe2f4abddb46b422ad671321a0a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aa62abe2f4abddb46b422ad671321a0a\") " pod="kube-system/kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.893611 kubelet[2627]: I1213 01:17:02.892148 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aaa6ac15d36c13ceb6816ed3f59189ab-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aaa6ac15d36c13ceb6816ed3f59189ab\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.893611 kubelet[2627]: I1213 01:17:02.892179 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aaa6ac15d36c13ceb6816ed3f59189ab-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aaa6ac15d36c13ceb6816ed3f59189ab\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:02.893822 kubelet[2627]: I1213 01:17:02.892343 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaa6ac15d36c13ceb6816ed3f59189ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" (UID: \"aaa6ac15d36c13ceb6816ed3f59189ab\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" Dec 13 01:17:03.382585 sudo[2657]: pam_unix(sudo:session): session closed for user root Dec 13 01:17:03.461240 kubelet[2627]: I1213 01:17:03.460236 2627 apiserver.go:52] "Watching apiserver" Dec 13 01:17:03.491182 kubelet[2627]: I1213 01:17:03.491133 2627 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:17:03.615085 kubelet[2627]: I1213 01:17:03.614604 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" podStartSLOduration=1.614576201 podStartE2EDuration="1.614576201s" podCreationTimestamp="2024-12-13 01:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:03.60088868 +0000 UTC m=+1.248624607" watchObservedRunningTime="2024-12-13 01:17:03.614576201 +0000 UTC m=+1.262312124" Dec 13 01:17:03.627056 kubelet[2627]: I1213 01:17:03.626687 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" podStartSLOduration=1.626661863 
podStartE2EDuration="1.626661863s" podCreationTimestamp="2024-12-13 01:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:03.615770293 +0000 UTC m=+1.263506219" watchObservedRunningTime="2024-12-13 01:17:03.626661863 +0000 UTC m=+1.274397793" Dec 13 01:17:03.644262 kubelet[2627]: I1213 01:17:03.643600 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" podStartSLOduration=1.6435766429999998 podStartE2EDuration="1.643576643s" podCreationTimestamp="2024-12-13 01:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:03.627735925 +0000 UTC m=+1.275471848" watchObservedRunningTime="2024-12-13 01:17:03.643576643 +0000 UTC m=+1.291312570" Dec 13 01:17:04.227261 update_engine[1449]: I20241213 01:17:04.226264 1449 update_attempter.cc:509] Updating boot flags... Dec 13 01:17:04.329272 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2677) Dec 13 01:17:04.556230 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2679) Dec 13 01:17:06.530530 sudo[1737]: pam_unix(sudo:session): session closed for user root Dec 13 01:17:06.573542 sshd[1734]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:06.581755 systemd[1]: sshd@8-10.128.0.51:22-147.75.109.163:40434.service: Deactivated successfully. Dec 13 01:17:06.584880 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:17:06.585261 systemd[1]: session-9.scope: Consumed 7.178s CPU time, 191.0M memory peak, 0B memory swap peak. Dec 13 01:17:06.586193 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:17:06.587848 systemd-logind[1445]: Removed session 9. 
Dec 13 01:17:15.840614 kubelet[2627]: I1213 01:17:15.840552 2627 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:17:15.841727 containerd[1459]: time="2024-12-13T01:17:15.841335093Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:17:15.842814 kubelet[2627]: I1213 01:17:15.842025 2627 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:17:16.126344 kubelet[2627]: I1213 01:17:16.123691 2627 topology_manager.go:215] "Topology Admit Handler" podUID="355a8fda-f20a-4368-9bba-65b246e41170" podNamespace="kube-system" podName="kube-proxy-gctz7" Dec 13 01:17:16.137668 kubelet[2627]: W1213 01:17:16.137487 2627 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal' and this object Dec 13 01:17:16.137668 kubelet[2627]: E1213 01:17:16.137552 2627 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal' and this object Dec 13 01:17:16.137668 kubelet[2627]: W1213 01:17:16.137602 2627 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" cannot list resource "configmaps" in API 
group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal' and this object Dec 13 01:17:16.137668 kubelet[2627]: E1213 01:17:16.137620 2627 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal' and this object Dec 13 01:17:16.143349 systemd[1]: Created slice kubepods-besteffort-pod355a8fda_f20a_4368_9bba_65b246e41170.slice - libcontainer container kubepods-besteffort-pod355a8fda_f20a_4368_9bba_65b246e41170.slice. Dec 13 01:17:16.146017 kubelet[2627]: I1213 01:17:16.144262 2627 topology_manager.go:215] "Topology Admit Handler" podUID="97ece0ab-6dbb-496d-b3b2-98ee98939f74" podNamespace="kube-system" podName="cilium-khqf7" Dec 13 01:17:16.175775 systemd[1]: Created slice kubepods-burstable-pod97ece0ab_6dbb_496d_b3b2_98ee98939f74.slice - libcontainer container kubepods-burstable-pod97ece0ab_6dbb_496d_b3b2_98ee98939f74.slice. 
Dec 13 01:17:16.176906 kubelet[2627]: W1213 01:17:16.176090 2627 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal' and this object Dec 13 01:17:16.176906 kubelet[2627]: E1213 01:17:16.176131 2627 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal' and this object Dec 13 01:17:16.276345 kubelet[2627]: I1213 01:17:16.276291 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/355a8fda-f20a-4368-9bba-65b246e41170-kube-proxy\") pod \"kube-proxy-gctz7\" (UID: \"355a8fda-f20a-4368-9bba-65b246e41170\") " pod="kube-system/kube-proxy-gctz7" Dec 13 01:17:16.276533 kubelet[2627]: I1213 01:17:16.276404 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-host-proc-sys-kernel\") pod \"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.276533 kubelet[2627]: I1213 01:17:16.276451 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-hostproc\") pod 
\"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.276533 kubelet[2627]: I1213 01:17:16.276478 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-host-proc-sys-net\") pod \"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.276533 kubelet[2627]: I1213 01:17:16.276506 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/355a8fda-f20a-4368-9bba-65b246e41170-lib-modules\") pod \"kube-proxy-gctz7\" (UID: \"355a8fda-f20a-4368-9bba-65b246e41170\") " pod="kube-system/kube-proxy-gctz7" Dec 13 01:17:16.276767 kubelet[2627]: I1213 01:17:16.276542 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cilium-run\") pod \"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.276767 kubelet[2627]: I1213 01:17:16.276568 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cilium-cgroup\") pod \"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.276767 kubelet[2627]: I1213 01:17:16.276600 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-bpf-maps\") pod \"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.276767 kubelet[2627]: 
I1213 01:17:16.276629 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gb68\" (UniqueName: \"kubernetes.io/projected/97ece0ab-6dbb-496d-b3b2-98ee98939f74-kube-api-access-4gb68\") pod \"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.276767 kubelet[2627]: I1213 01:17:16.276678 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-etc-cni-netd\") pod \"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.276767 kubelet[2627]: I1213 01:17:16.276706 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97ece0ab-6dbb-496d-b3b2-98ee98939f74-clustermesh-secrets\") pod \"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.277088 kubelet[2627]: I1213 01:17:16.276733 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cilium-config-path\") pod \"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.277088 kubelet[2627]: I1213 01:17:16.276765 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvhfm\" (UniqueName: \"kubernetes.io/projected/355a8fda-f20a-4368-9bba-65b246e41170-kube-api-access-cvhfm\") pod \"kube-proxy-gctz7\" (UID: \"355a8fda-f20a-4368-9bba-65b246e41170\") " pod="kube-system/kube-proxy-gctz7" Dec 13 01:17:16.277088 kubelet[2627]: I1213 01:17:16.276791 2627 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cni-path\") pod \"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.277088 kubelet[2627]: I1213 01:17:16.276819 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/355a8fda-f20a-4368-9bba-65b246e41170-xtables-lock\") pod \"kube-proxy-gctz7\" (UID: \"355a8fda-f20a-4368-9bba-65b246e41170\") " pod="kube-system/kube-proxy-gctz7" Dec 13 01:17:16.277088 kubelet[2627]: I1213 01:17:16.276844 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-lib-modules\") pod \"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.277088 kubelet[2627]: I1213 01:17:16.276879 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-xtables-lock\") pod \"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.277368 kubelet[2627]: I1213 01:17:16.276908 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97ece0ab-6dbb-496d-b3b2-98ee98939f74-hubble-tls\") pod \"cilium-khqf7\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " pod="kube-system/cilium-khqf7" Dec 13 01:17:16.514619 kubelet[2627]: I1213 01:17:16.514484 2627 topology_manager.go:215] "Topology Admit Handler" podUID="27f2dc47-d491-48ca-ba91-1a9aba4abdd7" podNamespace="kube-system" podName="cilium-operator-599987898-s49kj" Dec 13 
01:17:16.533564 systemd[1]: Created slice kubepods-besteffort-pod27f2dc47_d491_48ca_ba91_1a9aba4abdd7.slice - libcontainer container kubepods-besteffort-pod27f2dc47_d491_48ca_ba91_1a9aba4abdd7.slice.
Dec 13 01:17:16.682271 kubelet[2627]: I1213 01:17:16.682158 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27f2dc47-d491-48ca-ba91-1a9aba4abdd7-cilium-config-path\") pod \"cilium-operator-599987898-s49kj\" (UID: \"27f2dc47-d491-48ca-ba91-1a9aba4abdd7\") " pod="kube-system/cilium-operator-599987898-s49kj"
Dec 13 01:17:16.682511 kubelet[2627]: I1213 01:17:16.682292 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhzfn\" (UniqueName: \"kubernetes.io/projected/27f2dc47-d491-48ca-ba91-1a9aba4abdd7-kube-api-access-qhzfn\") pod \"cilium-operator-599987898-s49kj\" (UID: \"27f2dc47-d491-48ca-ba91-1a9aba4abdd7\") " pod="kube-system/cilium-operator-599987898-s49kj"
Dec 13 01:17:17.409563 kubelet[2627]: E1213 01:17:17.409492 2627 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:17:17.409563 kubelet[2627]: E1213 01:17:17.409550 2627 projected.go:200] Error preparing data for projected volume kube-api-access-cvhfm for pod kube-system/kube-proxy-gctz7: failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:17:17.410294 kubelet[2627]: E1213 01:17:17.409672 2627 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/355a8fda-f20a-4368-9bba-65b246e41170-kube-api-access-cvhfm podName:355a8fda-f20a-4368-9bba-65b246e41170 nodeName:}" failed. No retries permitted until 2024-12-13 01:17:17.909639916 +0000 UTC m=+15.557375845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cvhfm" (UniqueName: "kubernetes.io/projected/355a8fda-f20a-4368-9bba-65b246e41170-kube-api-access-cvhfm") pod "kube-proxy-gctz7" (UID: "355a8fda-f20a-4368-9bba-65b246e41170") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:17:17.415106 kubelet[2627]: E1213 01:17:17.414899 2627 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:17:17.415106 kubelet[2627]: E1213 01:17:17.414964 2627 projected.go:200] Error preparing data for projected volume kube-api-access-4gb68 for pod kube-system/cilium-khqf7: failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:17:17.415106 kubelet[2627]: E1213 01:17:17.415049 2627 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97ece0ab-6dbb-496d-b3b2-98ee98939f74-kube-api-access-4gb68 podName:97ece0ab-6dbb-496d-b3b2-98ee98939f74 nodeName:}" failed. No retries permitted until 2024-12-13 01:17:17.915024934 +0000 UTC m=+15.562760842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4gb68" (UniqueName: "kubernetes.io/projected/97ece0ab-6dbb-496d-b3b2-98ee98939f74-kube-api-access-4gb68") pod "cilium-khqf7" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 01:17:17.740476 containerd[1459]: time="2024-12-13T01:17:17.739805476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s49kj,Uid:27f2dc47-d491-48ca-ba91-1a9aba4abdd7,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:17.783860 containerd[1459]: time="2024-12-13T01:17:17.782766312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:17.784411 containerd[1459]: time="2024-12-13T01:17:17.784118462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:17.784411 containerd[1459]: time="2024-12-13T01:17:17.784151974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:17.784660 containerd[1459]: time="2024-12-13T01:17:17.784371192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:17.818671 systemd[1]: run-containerd-runc-k8s.io-96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b-runc.8AqP8p.mount: Deactivated successfully.
Dec 13 01:17:17.830492 systemd[1]: Started cri-containerd-96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b.scope - libcontainer container 96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b.
Dec 13 01:17:17.886798 containerd[1459]: time="2024-12-13T01:17:17.886745735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s49kj,Uid:27f2dc47-d491-48ca-ba91-1a9aba4abdd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\""
Dec 13 01:17:17.890280 containerd[1459]: time="2024-12-13T01:17:17.890232699Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 01:17:18.263685 containerd[1459]: time="2024-12-13T01:17:18.262953998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gctz7,Uid:355a8fda-f20a-4368-9bba-65b246e41170,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:18.287830 containerd[1459]: time="2024-12-13T01:17:18.286842085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khqf7,Uid:97ece0ab-6dbb-496d-b3b2-98ee98939f74,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:18.304342 containerd[1459]: time="2024-12-13T01:17:18.303942715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:18.304342 containerd[1459]: time="2024-12-13T01:17:18.304033806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:18.304342 containerd[1459]: time="2024-12-13T01:17:18.304061819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:18.306846 containerd[1459]: time="2024-12-13T01:17:18.306712755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:18.338502 systemd[1]: Started cri-containerd-d252fdf83f9cc4453961c8c42f6b9c5382be15f76fbb8ea3429e2d2eb11b1068.scope - libcontainer container d252fdf83f9cc4453961c8c42f6b9c5382be15f76fbb8ea3429e2d2eb11b1068.
Dec 13 01:17:18.338734 containerd[1459]: time="2024-12-13T01:17:18.338378978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:18.338734 containerd[1459]: time="2024-12-13T01:17:18.338476922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:18.338734 containerd[1459]: time="2024-12-13T01:17:18.338503043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:18.338734 containerd[1459]: time="2024-12-13T01:17:18.338628698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:18.379528 systemd[1]: Started cri-containerd-939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97.scope - libcontainer container 939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97.
Dec 13 01:17:18.416517 containerd[1459]: time="2024-12-13T01:17:18.416265343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gctz7,Uid:355a8fda-f20a-4368-9bba-65b246e41170,Namespace:kube-system,Attempt:0,} returns sandbox id \"d252fdf83f9cc4453961c8c42f6b9c5382be15f76fbb8ea3429e2d2eb11b1068\""
Dec 13 01:17:18.428498 containerd[1459]: time="2024-12-13T01:17:18.428386596Z" level=info msg="CreateContainer within sandbox \"d252fdf83f9cc4453961c8c42f6b9c5382be15f76fbb8ea3429e2d2eb11b1068\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:17:18.439641 containerd[1459]: time="2024-12-13T01:17:18.439517007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khqf7,Uid:97ece0ab-6dbb-496d-b3b2-98ee98939f74,Namespace:kube-system,Attempt:0,} returns sandbox id \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\""
Dec 13 01:17:18.457140 containerd[1459]: time="2024-12-13T01:17:18.457059632Z" level=info msg="CreateContainer within sandbox \"d252fdf83f9cc4453961c8c42f6b9c5382be15f76fbb8ea3429e2d2eb11b1068\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b4ffffdbce345e6c01472eeeb1f2a5d022b8134be30b2d34e063767345ed819\""
Dec 13 01:17:18.459069 containerd[1459]: time="2024-12-13T01:17:18.458998682Z" level=info msg="StartContainer for \"8b4ffffdbce345e6c01472eeeb1f2a5d022b8134be30b2d34e063767345ed819\""
Dec 13 01:17:18.504441 systemd[1]: Started cri-containerd-8b4ffffdbce345e6c01472eeeb1f2a5d022b8134be30b2d34e063767345ed819.scope - libcontainer container 8b4ffffdbce345e6c01472eeeb1f2a5d022b8134be30b2d34e063767345ed819.
Dec 13 01:17:18.547706 containerd[1459]: time="2024-12-13T01:17:18.547516360Z" level=info msg="StartContainer for \"8b4ffffdbce345e6c01472eeeb1f2a5d022b8134be30b2d34e063767345ed819\" returns successfully"
Dec 13 01:17:18.626977 kubelet[2627]: I1213 01:17:18.626902 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gctz7" podStartSLOduration=2.626862612 podStartE2EDuration="2.626862612s" podCreationTimestamp="2024-12-13 01:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:18.626431637 +0000 UTC m=+16.274167563" watchObservedRunningTime="2024-12-13 01:17:18.626862612 +0000 UTC m=+16.274598541"
Dec 13 01:17:20.253149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834712933.mount: Deactivated successfully.
Dec 13 01:17:20.942911 containerd[1459]: time="2024-12-13T01:17:20.942820371Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:20.944496 containerd[1459]: time="2024-12-13T01:17:20.944414873Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907213"
Dec 13 01:17:20.946368 containerd[1459]: time="2024-12-13T01:17:20.946303355Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:20.949437 containerd[1459]: time="2024-12-13T01:17:20.948760836Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.058473584s"
Dec 13 01:17:20.949437 containerd[1459]: time="2024-12-13T01:17:20.948812809Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 01:17:20.951803 containerd[1459]: time="2024-12-13T01:17:20.951553680Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 01:17:20.953156 containerd[1459]: time="2024-12-13T01:17:20.953106857Z" level=info msg="CreateContainer within sandbox \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 01:17:20.977401 containerd[1459]: time="2024-12-13T01:17:20.977341186Z" level=info msg="CreateContainer within sandbox \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\""
Dec 13 01:17:20.978964 containerd[1459]: time="2024-12-13T01:17:20.978057453Z" level=info msg="StartContainer for \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\""
Dec 13 01:17:21.023451 systemd[1]: Started cri-containerd-b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1.scope - libcontainer container b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1.
Dec 13 01:17:21.058422 containerd[1459]: time="2024-12-13T01:17:21.058160190Z" level=info msg="StartContainer for \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\" returns successfully"
Dec 13 01:17:26.481082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241609398.mount: Deactivated successfully.
Dec 13 01:17:29.357820 containerd[1459]: time="2024-12-13T01:17:29.357745199Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:29.359701 containerd[1459]: time="2024-12-13T01:17:29.359599179Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734691"
Dec 13 01:17:29.361844 containerd[1459]: time="2024-12-13T01:17:29.361756607Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:17:29.364232 containerd[1459]: time="2024-12-13T01:17:29.364029703Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.412430446s"
Dec 13 01:17:29.364232 containerd[1459]: time="2024-12-13T01:17:29.364093225Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 01:17:29.367824 containerd[1459]: time="2024-12-13T01:17:29.367784695Z" level=info msg="CreateContainer within sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:17:29.389337 containerd[1459]: time="2024-12-13T01:17:29.388732227Z" level=info msg="CreateContainer within sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1\""
Dec 13 01:17:29.391670 containerd[1459]: time="2024-12-13T01:17:29.391615928Z" level=info msg="StartContainer for \"0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1\""
Dec 13 01:17:29.438061 systemd[1]: run-containerd-runc-k8s.io-0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1-runc.NaguXd.mount: Deactivated successfully.
Dec 13 01:17:29.446465 systemd[1]: Started cri-containerd-0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1.scope - libcontainer container 0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1.
Dec 13 01:17:29.484430 containerd[1459]: time="2024-12-13T01:17:29.484253992Z" level=info msg="StartContainer for \"0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1\" returns successfully"
Dec 13 01:17:29.498457 systemd[1]: cri-containerd-0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1.scope: Deactivated successfully.
Dec 13 01:17:29.666986 kubelet[2627]: I1213 01:17:29.666640 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-s49kj" podStartSLOduration=10.605271689 podStartE2EDuration="13.666598536s" podCreationTimestamp="2024-12-13 01:17:16 +0000 UTC" firstStartedPulling="2024-12-13 01:17:17.889082021 +0000 UTC m=+15.536817941" lastFinishedPulling="2024-12-13 01:17:20.950408881 +0000 UTC m=+18.598144788" observedRunningTime="2024-12-13 01:17:21.759386809 +0000 UTC m=+19.407122738" watchObservedRunningTime="2024-12-13 01:17:29.666598536 +0000 UTC m=+27.314334465"
Dec 13 01:17:30.386117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1-rootfs.mount: Deactivated successfully.
Dec 13 01:17:31.568870 containerd[1459]: time="2024-12-13T01:17:31.568611664Z" level=info msg="shim disconnected" id=0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1 namespace=k8s.io
Dec 13 01:17:31.568870 containerd[1459]: time="2024-12-13T01:17:31.568705458Z" level=warning msg="cleaning up after shim disconnected" id=0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1 namespace=k8s.io
Dec 13 01:17:31.568870 containerd[1459]: time="2024-12-13T01:17:31.568724523Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:31.656511 containerd[1459]: time="2024-12-13T01:17:31.656439152Z" level=info msg="CreateContainer within sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:17:31.680566 containerd[1459]: time="2024-12-13T01:17:31.680502727Z" level=info msg="CreateContainer within sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2\""
Dec 13 01:17:31.681463 containerd[1459]: time="2024-12-13T01:17:31.681425405Z" level=info msg="StartContainer for \"0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2\""
Dec 13 01:17:31.735494 systemd[1]: Started cri-containerd-0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2.scope - libcontainer container 0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2.
Dec 13 01:17:31.785388 containerd[1459]: time="2024-12-13T01:17:31.785230053Z" level=info msg="StartContainer for \"0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2\" returns successfully"
Dec 13 01:17:31.800901 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:17:31.801876 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:17:31.802086 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:17:31.811351 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:17:31.811750 systemd[1]: cri-containerd-0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2.scope: Deactivated successfully.
Dec 13 01:17:31.848522 containerd[1459]: time="2024-12-13T01:17:31.848182221Z" level=info msg="shim disconnected" id=0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2 namespace=k8s.io
Dec 13 01:17:31.848522 containerd[1459]: time="2024-12-13T01:17:31.848282857Z" level=warning msg="cleaning up after shim disconnected" id=0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2 namespace=k8s.io
Dec 13 01:17:31.848522 containerd[1459]: time="2024-12-13T01:17:31.848296798Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:31.848549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2-rootfs.mount: Deactivated successfully.
Dec 13 01:17:31.851448 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:17:32.661482 containerd[1459]: time="2024-12-13T01:17:32.661407365Z" level=info msg="CreateContainer within sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:17:32.698278 containerd[1459]: time="2024-12-13T01:17:32.698221203Z" level=info msg="CreateContainer within sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab\""
Dec 13 01:17:32.701230 containerd[1459]: time="2024-12-13T01:17:32.699102245Z" level=info msg="StartContainer for \"52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab\""
Dec 13 01:17:32.748511 systemd[1]: Started cri-containerd-52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab.scope - libcontainer container 52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab.
Dec 13 01:17:32.792659 containerd[1459]: time="2024-12-13T01:17:32.792599271Z" level=info msg="StartContainer for \"52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab\" returns successfully"
Dec 13 01:17:32.795007 systemd[1]: cri-containerd-52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab.scope: Deactivated successfully.
Dec 13 01:17:32.841708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab-rootfs.mount: Deactivated successfully.
Dec 13 01:17:32.845142 containerd[1459]: time="2024-12-13T01:17:32.845067082Z" level=info msg="shim disconnected" id=52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab namespace=k8s.io
Dec 13 01:17:32.845142 containerd[1459]: time="2024-12-13T01:17:32.845143435Z" level=warning msg="cleaning up after shim disconnected" id=52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab namespace=k8s.io
Dec 13 01:17:32.845534 containerd[1459]: time="2024-12-13T01:17:32.845157695Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:32.863163 containerd[1459]: time="2024-12-13T01:17:32.863094269Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:17:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:17:33.668763 containerd[1459]: time="2024-12-13T01:17:33.668556072Z" level=info msg="CreateContainer within sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:17:33.696751 containerd[1459]: time="2024-12-13T01:17:33.696640499Z" level=info msg="CreateContainer within sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c\""
Dec 13 01:17:33.699067 containerd[1459]: time="2024-12-13T01:17:33.697422942Z" level=info msg="StartContainer for \"757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c\""
Dec 13 01:17:33.749439 systemd[1]: run-containerd-runc-k8s.io-757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c-runc.wdFwQ5.mount: Deactivated successfully.
Dec 13 01:17:33.759514 systemd[1]: Started cri-containerd-757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c.scope - libcontainer container 757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c.
Dec 13 01:17:33.797876 systemd[1]: cri-containerd-757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c.scope: Deactivated successfully.
Dec 13 01:17:33.803830 containerd[1459]: time="2024-12-13T01:17:33.801135085Z" level=info msg="StartContainer for \"757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c\" returns successfully"
Dec 13 01:17:33.838001 containerd[1459]: time="2024-12-13T01:17:33.837910134Z" level=info msg="shim disconnected" id=757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c namespace=k8s.io
Dec 13 01:17:33.838344 containerd[1459]: time="2024-12-13T01:17:33.838308382Z" level=warning msg="cleaning up after shim disconnected" id=757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c namespace=k8s.io
Dec 13 01:17:33.838344 containerd[1459]: time="2024-12-13T01:17:33.838338858Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:17:34.673098 containerd[1459]: time="2024-12-13T01:17:34.672662671Z" level=info msg="CreateContainer within sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:17:34.686772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c-rootfs.mount: Deactivated successfully.
Dec 13 01:17:34.708846 containerd[1459]: time="2024-12-13T01:17:34.707941873Z" level=info msg="CreateContainer within sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\""
Dec 13 01:17:34.712181 containerd[1459]: time="2024-12-13T01:17:34.709954972Z" level=info msg="StartContainer for \"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\""
Dec 13 01:17:34.757422 systemd[1]: Started cri-containerd-13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00.scope - libcontainer container 13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00.
Dec 13 01:17:34.797086 containerd[1459]: time="2024-12-13T01:17:34.796918994Z" level=info msg="StartContainer for \"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\" returns successfully"
Dec 13 01:17:34.924091 kubelet[2627]: I1213 01:17:34.923965 2627 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:17:34.962908 kubelet[2627]: I1213 01:17:34.962849 2627 topology_manager.go:215] "Topology Admit Handler" podUID="c16dca34-513b-493f-832b-17c509872528" podNamespace="kube-system" podName="coredns-7db6d8ff4d-znzl6"
Dec 13 01:17:34.966943 kubelet[2627]: I1213 01:17:34.965031 2627 topology_manager.go:215] "Topology Admit Handler" podUID="46975288-889f-4882-b5b6-c7c6d2a70713" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lfktn"
Dec 13 01:17:34.977318 systemd[1]: Created slice kubepods-burstable-podc16dca34_513b_493f_832b_17c509872528.slice - libcontainer container kubepods-burstable-podc16dca34_513b_493f_832b_17c509872528.slice.
Dec 13 01:17:34.994049 systemd[1]: Created slice kubepods-burstable-pod46975288_889f_4882_b5b6_c7c6d2a70713.slice - libcontainer container kubepods-burstable-pod46975288_889f_4882_b5b6_c7c6d2a70713.slice.
Dec 13 01:17:35.018557 kubelet[2627]: I1213 01:17:35.018326 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v56q\" (UniqueName: \"kubernetes.io/projected/46975288-889f-4882-b5b6-c7c6d2a70713-kube-api-access-4v56q\") pod \"coredns-7db6d8ff4d-lfktn\" (UID: \"46975288-889f-4882-b5b6-c7c6d2a70713\") " pod="kube-system/coredns-7db6d8ff4d-lfktn"
Dec 13 01:17:35.018557 kubelet[2627]: I1213 01:17:35.018398 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46975288-889f-4882-b5b6-c7c6d2a70713-config-volume\") pod \"coredns-7db6d8ff4d-lfktn\" (UID: \"46975288-889f-4882-b5b6-c7c6d2a70713\") " pod="kube-system/coredns-7db6d8ff4d-lfktn"
Dec 13 01:17:35.018557 kubelet[2627]: I1213 01:17:35.018437 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dv9x\" (UniqueName: \"kubernetes.io/projected/c16dca34-513b-493f-832b-17c509872528-kube-api-access-5dv9x\") pod \"coredns-7db6d8ff4d-znzl6\" (UID: \"c16dca34-513b-493f-832b-17c509872528\") " pod="kube-system/coredns-7db6d8ff4d-znzl6"
Dec 13 01:17:35.018557 kubelet[2627]: I1213 01:17:35.018468 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c16dca34-513b-493f-832b-17c509872528-config-volume\") pod \"coredns-7db6d8ff4d-znzl6\" (UID: \"c16dca34-513b-493f-832b-17c509872528\") " pod="kube-system/coredns-7db6d8ff4d-znzl6"
Dec 13 01:17:35.286083 containerd[1459]: time="2024-12-13T01:17:35.285910785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-znzl6,Uid:c16dca34-513b-493f-832b-17c509872528,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:35.304573 containerd[1459]: time="2024-12-13T01:17:35.304468661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lfktn,Uid:46975288-889f-4882-b5b6-c7c6d2a70713,Namespace:kube-system,Attempt:0,}"
Dec 13 01:17:35.702897 kubelet[2627]: I1213 01:17:35.701672 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-khqf7" podStartSLOduration=8.777756458 podStartE2EDuration="19.701646549s" podCreationTimestamp="2024-12-13 01:17:16 +0000 UTC" firstStartedPulling="2024-12-13 01:17:18.441425699 +0000 UTC m=+16.089161611" lastFinishedPulling="2024-12-13 01:17:29.365315777 +0000 UTC m=+27.013051702" observedRunningTime="2024-12-13 01:17:35.697754419 +0000 UTC m=+33.345490364" watchObservedRunningTime="2024-12-13 01:17:35.701646549 +0000 UTC m=+33.349382491"
Dec 13 01:17:37.096024 systemd-networkd[1371]: cilium_host: Link UP
Dec 13 01:17:37.097553 systemd-networkd[1371]: cilium_net: Link UP
Dec 13 01:17:37.097564 systemd-networkd[1371]: cilium_net: Gained carrier
Dec 13 01:17:37.097951 systemd-networkd[1371]: cilium_host: Gained carrier
Dec 13 01:17:37.253199 systemd-networkd[1371]: cilium_vxlan: Link UP
Dec 13 01:17:37.253236 systemd-networkd[1371]: cilium_vxlan: Gained carrier
Dec 13 01:17:37.387412 systemd-networkd[1371]: cilium_net: Gained IPv6LL
Dec 13 01:17:37.538422 kernel: NET: Registered PF_ALG protocol family
Dec 13 01:17:37.859833 systemd-networkd[1371]: cilium_host: Gained IPv6LL
Dec 13 01:17:38.402719 systemd-networkd[1371]: lxc_health: Link UP
Dec 13 01:17:38.417383 systemd-networkd[1371]: lxc_health: Gained carrier
Dec 13 01:17:38.873080 systemd-networkd[1371]: lxcd40e9174a670: Link UP
Dec 13 01:17:38.888229 kernel: eth0: renamed from tmpc68f1
Dec 13 01:17:38.898793 systemd-networkd[1371]: lxcd40e9174a670: Gained carrier
Dec 13 01:17:38.916316 systemd-networkd[1371]: lxc3927586c73d0: Link UP
Dec 13 01:17:38.930938 kernel: eth0: renamed from tmp93f6f
Dec 13 01:17:38.940287 systemd-networkd[1371]: lxc3927586c73d0: Gained carrier
Dec 13 01:17:38.947757 systemd-networkd[1371]: cilium_vxlan: Gained IPv6LL
Dec 13 01:17:39.907461 systemd-networkd[1371]: lxc_health: Gained IPv6LL
Dec 13 01:17:40.036330 systemd-networkd[1371]: lxc3927586c73d0: Gained IPv6LL
Dec 13 01:17:40.547432 systemd-networkd[1371]: lxcd40e9174a670: Gained IPv6LL
Dec 13 01:17:43.375723 ntpd[1428]: Listen normally on 8 cilium_host 192.168.0.49:123
Dec 13 01:17:43.376596 ntpd[1428]: 13 Dec 01:17:43 ntpd[1428]: Listen normally on 8 cilium_host 192.168.0.49:123
Dec 13 01:17:43.376596 ntpd[1428]: 13 Dec 01:17:43 ntpd[1428]: Listen normally on 9 cilium_net [fe80::645a:25ff:fe0e:6dfb%4]:123
Dec 13 01:17:43.376596 ntpd[1428]: 13 Dec 01:17:43 ntpd[1428]: Listen normally on 10 cilium_host [fe80::748e:7cff:fed7:4cdd%5]:123
Dec 13 01:17:43.375909 ntpd[1428]: Listen normally on 9 cilium_net [fe80::645a:25ff:fe0e:6dfb%4]:123
Dec 13 01:17:43.375992 ntpd[1428]: Listen normally on 10 cilium_host [fe80::748e:7cff:fed7:4cdd%5]:123
Dec 13 01:17:43.377267 ntpd[1428]: Listen normally on 11 cilium_vxlan [fe80::a40d:3aff:feea:bd4c%6]:123
Dec 13 01:17:43.377791 ntpd[1428]: 13 Dec 01:17:43 ntpd[1428]: Listen normally on 11 cilium_vxlan [fe80::a40d:3aff:feea:bd4c%6]:123
Dec 13 01:17:43.377791 ntpd[1428]: 13 Dec 01:17:43 ntpd[1428]: Listen normally on 12 lxc_health [fe80::3018:31ff:fecf:969b%8]:123
Dec 13 01:17:43.377791 ntpd[1428]: 13 Dec 01:17:43 ntpd[1428]: Listen normally on 13 lxcd40e9174a670 [fe80::809:96ff:fe18:c60b%10]:123
Dec 13 01:17:43.377791 ntpd[1428]: 13 Dec 01:17:43 ntpd[1428]: Listen normally on 14 lxc3927586c73d0 [fe80::18fb:87ff:fe56:1372%12]:123
Dec 13 01:17:43.377355 ntpd[1428]: Listen normally on 12 lxc_health [fe80::3018:31ff:fecf:969b%8]:123
Dec 13 01:17:43.377417 ntpd[1428]: Listen normally on 13 lxcd40e9174a670 [fe80::809:96ff:fe18:c60b%10]:123
Dec 13 01:17:43.377511 ntpd[1428]: Listen normally on 14 lxc3927586c73d0 [fe80::18fb:87ff:fe56:1372%12]:123
Dec 13 01:17:44.108253 containerd[1459]: time="2024-12-13T01:17:44.106498867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:44.108253 containerd[1459]: time="2024-12-13T01:17:44.106722322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:44.108253 containerd[1459]: time="2024-12-13T01:17:44.106811059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:44.108253 containerd[1459]: time="2024-12-13T01:17:44.107118308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:44.116285 containerd[1459]: time="2024-12-13T01:17:44.113431351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:17:44.116285 containerd[1459]: time="2024-12-13T01:17:44.113515850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:17:44.116285 containerd[1459]: time="2024-12-13T01:17:44.113543651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:44.116285 containerd[1459]: time="2024-12-13T01:17:44.113680191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:17:44.196447 systemd[1]: Started cri-containerd-93f6f2a96c421fbdc68c12ee4b0b2ec2bd443b97cebec262983413b6d4362907.scope - libcontainer container 93f6f2a96c421fbdc68c12ee4b0b2ec2bd443b97cebec262983413b6d4362907.
Dec 13 01:17:44.204133 systemd[1]: Started cri-containerd-c68f144c2c16ef02078a06edb19780183f4263a77df92ecb7c9a4f8762b5620f.scope - libcontainer container c68f144c2c16ef02078a06edb19780183f4263a77df92ecb7c9a4f8762b5620f.
Dec 13 01:17:44.320892 containerd[1459]: time="2024-12-13T01:17:44.320797544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-znzl6,Uid:c16dca34-513b-493f-832b-17c509872528,Namespace:kube-system,Attempt:0,} returns sandbox id \"c68f144c2c16ef02078a06edb19780183f4263a77df92ecb7c9a4f8762b5620f\""
Dec 13 01:17:44.331221 containerd[1459]: time="2024-12-13T01:17:44.330140739Z" level=info msg="CreateContainer within sandbox \"c68f144c2c16ef02078a06edb19780183f4263a77df92ecb7c9a4f8762b5620f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:17:44.340347 containerd[1459]: time="2024-12-13T01:17:44.340297783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lfktn,Uid:46975288-889f-4882-b5b6-c7c6d2a70713,Namespace:kube-system,Attempt:0,} returns sandbox id \"93f6f2a96c421fbdc68c12ee4b0b2ec2bd443b97cebec262983413b6d4362907\""
Dec 13 01:17:44.346488 containerd[1459]: time="2024-12-13T01:17:44.346426943Z" level=info msg="CreateContainer within sandbox \"93f6f2a96c421fbdc68c12ee4b0b2ec2bd443b97cebec262983413b6d4362907\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:17:44.370344 containerd[1459]: time="2024-12-13T01:17:44.370027805Z" level=info msg="CreateContainer within sandbox \"c68f144c2c16ef02078a06edb19780183f4263a77df92ecb7c9a4f8762b5620f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c3a8891d57ca3b573fa98bb73693b37bb1712c8b4cf615c3781bfcd3a76cf84a\""
Dec 13 01:17:44.373136 containerd[1459]: time="2024-12-13T01:17:44.373088866Z" level=info msg="StartContainer for \"c3a8891d57ca3b573fa98bb73693b37bb1712c8b4cf615c3781bfcd3a76cf84a\""
Dec 13 01:17:44.377023 containerd[1459]: time="2024-12-13T01:17:44.376980792Z" level=info msg="CreateContainer within sandbox \"93f6f2a96c421fbdc68c12ee4b0b2ec2bd443b97cebec262983413b6d4362907\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"664d661fa469bf2cbd139f254235e1e28186016b0e52981c3bd71e371c0468b0\""
Dec 13 01:17:44.380140 containerd[1459]: time="2024-12-13T01:17:44.377985921Z" level=info msg="StartContainer for \"664d661fa469bf2cbd139f254235e1e28186016b0e52981c3bd71e371c0468b0\""
Dec 13 01:17:44.436522 systemd[1]: Started cri-containerd-c3a8891d57ca3b573fa98bb73693b37bb1712c8b4cf615c3781bfcd3a76cf84a.scope - libcontainer container c3a8891d57ca3b573fa98bb73693b37bb1712c8b4cf615c3781bfcd3a76cf84a.
Dec 13 01:17:44.446469 systemd[1]: Started cri-containerd-664d661fa469bf2cbd139f254235e1e28186016b0e52981c3bd71e371c0468b0.scope - libcontainer container 664d661fa469bf2cbd139f254235e1e28186016b0e52981c3bd71e371c0468b0.
Dec 13 01:17:44.494661 containerd[1459]: time="2024-12-13T01:17:44.494577066Z" level=info msg="StartContainer for \"c3a8891d57ca3b573fa98bb73693b37bb1712c8b4cf615c3781bfcd3a76cf84a\" returns successfully"
Dec 13 01:17:44.507774 containerd[1459]: time="2024-12-13T01:17:44.507378219Z" level=info msg="StartContainer for \"664d661fa469bf2cbd139f254235e1e28186016b0e52981c3bd71e371c0468b0\" returns successfully"
Dec 13 01:17:44.721186 kubelet[2627]: I1213 01:17:44.720875 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lfktn" podStartSLOduration=28.720806411 podStartE2EDuration="28.720806411s" podCreationTimestamp="2024-12-13 01:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:44.717618663 +0000 UTC m=+42.365354615" watchObservedRunningTime="2024-12-13 01:17:44.720806411 +0000 UTC m=+42.368542339"
Dec 13 01:17:44.739182 kubelet[2627]: I1213 01:17:44.738750 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-znzl6" podStartSLOduration=28.738720931 podStartE2EDuration="28.738720931s" podCreationTimestamp="2024-12-13 01:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:17:44.73570114 +0000 UTC m=+42.383437069" watchObservedRunningTime="2024-12-13 01:17:44.738720931 +0000 UTC m=+42.386456862"
Dec 13 01:17:47.831639 systemd[1]: Started sshd@9-10.128.0.51:22-147.75.109.163:41432.service - OpenSSH per-connection server daemon (147.75.109.163:41432).
Dec 13 01:17:48.115424 sshd[4008]: Accepted publickey for core from 147.75.109.163 port 41432 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:17:48.117600 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:48.125180 systemd-logind[1445]: New session 10 of user core.
Dec 13 01:17:48.130490 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:17:48.432238 sshd[4008]: pam_unix(sshd:session): session closed for user core
Dec 13 01:17:48.437257 systemd[1]: sshd@9-10.128.0.51:22-147.75.109.163:41432.service: Deactivated successfully.
Dec 13 01:17:48.440796 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:17:48.443349 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:17:48.445696 systemd-logind[1445]: Removed session 10.
Dec 13 01:17:53.486651 systemd[1]: Started sshd@10-10.128.0.51:22-147.75.109.163:41438.service - OpenSSH per-connection server daemon (147.75.109.163:41438).
Dec 13 01:17:53.780431 sshd[4027]: Accepted publickey for core from 147.75.109.163 port 41438 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:17:53.782351 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:17:53.788430 systemd-logind[1445]: New session 11 of user core.
Dec 13 01:17:53.796456 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:17:54.079198 sshd[4027]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:54.084048 systemd[1]: sshd@10-10.128.0.51:22-147.75.109.163:41438.service: Deactivated successfully. Dec 13 01:17:54.087501 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:17:54.090424 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:17:54.092259 systemd-logind[1445]: Removed session 11. Dec 13 01:17:59.134640 systemd[1]: Started sshd@11-10.128.0.51:22-147.75.109.163:35380.service - OpenSSH per-connection server daemon (147.75.109.163:35380). Dec 13 01:17:59.422275 sshd[4041]: Accepted publickey for core from 147.75.109.163 port 35380 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:17:59.424316 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:17:59.431078 systemd-logind[1445]: New session 12 of user core. Dec 13 01:17:59.434510 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:17:59.715455 sshd[4041]: pam_unix(sshd:session): session closed for user core Dec 13 01:17:59.722752 systemd[1]: sshd@11-10.128.0.51:22-147.75.109.163:35380.service: Deactivated successfully. Dec 13 01:17:59.726448 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:17:59.727799 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:17:59.729877 systemd-logind[1445]: Removed session 12. Dec 13 01:18:04.776633 systemd[1]: Started sshd@12-10.128.0.51:22-147.75.109.163:35388.service - OpenSSH per-connection server daemon (147.75.109.163:35388). Dec 13 01:18:05.074244 sshd[4057]: Accepted publickey for core from 147.75.109.163 port 35388 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:05.076215 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:05.081854 systemd-logind[1445]: New session 13 of user core. 
Dec 13 01:18:05.091490 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:18:05.372999 sshd[4057]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:05.378130 systemd[1]: sshd@12-10.128.0.51:22-147.75.109.163:35388.service: Deactivated successfully. Dec 13 01:18:05.381498 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:18:05.383814 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:18:05.385839 systemd-logind[1445]: Removed session 13. Dec 13 01:18:10.433676 systemd[1]: Started sshd@13-10.128.0.51:22-147.75.109.163:38216.service - OpenSSH per-connection server daemon (147.75.109.163:38216). Dec 13 01:18:10.721719 sshd[4070]: Accepted publickey for core from 147.75.109.163 port 38216 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:10.723706 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:10.729650 systemd-logind[1445]: New session 14 of user core. Dec 13 01:18:10.735454 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:18:11.017844 sshd[4070]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:11.023135 systemd[1]: sshd@13-10.128.0.51:22-147.75.109.163:38216.service: Deactivated successfully. Dec 13 01:18:11.026417 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:18:11.028547 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:18:11.030091 systemd-logind[1445]: Removed session 14. Dec 13 01:18:11.073669 systemd[1]: Started sshd@14-10.128.0.51:22-147.75.109.163:38224.service - OpenSSH per-connection server daemon (147.75.109.163:38224). 
Dec 13 01:18:11.359788 sshd[4084]: Accepted publickey for core from 147.75.109.163 port 38224 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:11.361991 sshd[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:11.369253 systemd-logind[1445]: New session 15 of user core. Dec 13 01:18:11.374560 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:18:11.722668 sshd[4084]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:11.727815 systemd[1]: sshd@14-10.128.0.51:22-147.75.109.163:38224.service: Deactivated successfully. Dec 13 01:18:11.730731 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:18:11.733256 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:18:11.734947 systemd-logind[1445]: Removed session 15. Dec 13 01:18:11.778666 systemd[1]: Started sshd@15-10.128.0.51:22-147.75.109.163:38240.service - OpenSSH per-connection server daemon (147.75.109.163:38240). Dec 13 01:18:12.062714 sshd[4095]: Accepted publickey for core from 147.75.109.163 port 38240 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:12.064712 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:12.071561 systemd-logind[1445]: New session 16 of user core. Dec 13 01:18:12.077467 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:18:12.346396 sshd[4095]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:12.351454 systemd[1]: sshd@15-10.128.0.51:22-147.75.109.163:38240.service: Deactivated successfully. Dec 13 01:18:12.354272 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:18:12.356476 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:18:12.359008 systemd-logind[1445]: Removed session 16. 
Dec 13 01:18:17.402714 systemd[1]: Started sshd@16-10.128.0.51:22-147.75.109.163:46818.service - OpenSSH per-connection server daemon (147.75.109.163:46818). Dec 13 01:18:17.685945 sshd[4108]: Accepted publickey for core from 147.75.109.163 port 46818 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:17.687900 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:17.693604 systemd-logind[1445]: New session 17 of user core. Dec 13 01:18:17.703582 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:18:17.969118 sshd[4108]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:17.974067 systemd[1]: sshd@16-10.128.0.51:22-147.75.109.163:46818.service: Deactivated successfully. Dec 13 01:18:17.977045 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:18:17.979306 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:18:17.981160 systemd-logind[1445]: Removed session 17. Dec 13 01:18:23.026645 systemd[1]: Started sshd@17-10.128.0.51:22-147.75.109.163:46826.service - OpenSSH per-connection server daemon (147.75.109.163:46826). Dec 13 01:18:23.311639 sshd[4122]: Accepted publickey for core from 147.75.109.163 port 46826 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:23.313579 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:23.320170 systemd-logind[1445]: New session 18 of user core. Dec 13 01:18:23.325448 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:18:23.597781 sshd[4122]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:23.603809 systemd[1]: sshd@17-10.128.0.51:22-147.75.109.163:46826.service: Deactivated successfully. Dec 13 01:18:23.606799 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:18:23.608025 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. 
Dec 13 01:18:23.609718 systemd-logind[1445]: Removed session 18. Dec 13 01:18:23.655618 systemd[1]: Started sshd@18-10.128.0.51:22-147.75.109.163:46830.service - OpenSSH per-connection server daemon (147.75.109.163:46830). Dec 13 01:18:23.946039 sshd[4134]: Accepted publickey for core from 147.75.109.163 port 46830 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:23.948276 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:23.955693 systemd-logind[1445]: New session 19 of user core. Dec 13 01:18:23.961511 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:18:24.327781 sshd[4134]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:24.332346 systemd[1]: sshd@18-10.128.0.51:22-147.75.109.163:46830.service: Deactivated successfully. Dec 13 01:18:24.335423 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:18:24.337781 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:18:24.339501 systemd-logind[1445]: Removed session 19. Dec 13 01:18:24.386613 systemd[1]: Started sshd@19-10.128.0.51:22-147.75.109.163:46842.service - OpenSSH per-connection server daemon (147.75.109.163:46842). Dec 13 01:18:24.676168 sshd[4145]: Accepted publickey for core from 147.75.109.163 port 46842 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:24.678336 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:24.683916 systemd-logind[1445]: New session 20 of user core. Dec 13 01:18:24.690476 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:18:26.467322 sshd[4145]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:26.475298 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:18:26.476162 systemd[1]: sshd@19-10.128.0.51:22-147.75.109.163:46842.service: Deactivated successfully. 
Dec 13 01:18:26.479898 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:18:26.481931 systemd-logind[1445]: Removed session 20. Dec 13 01:18:26.523632 systemd[1]: Started sshd@20-10.128.0.51:22-147.75.109.163:44134.service - OpenSSH per-connection server daemon (147.75.109.163:44134). Dec 13 01:18:26.809070 sshd[4164]: Accepted publickey for core from 147.75.109.163 port 44134 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:26.810976 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:26.818084 systemd-logind[1445]: New session 21 of user core. Dec 13 01:18:26.821417 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:18:27.220395 sshd[4164]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:27.225334 systemd[1]: sshd@20-10.128.0.51:22-147.75.109.163:44134.service: Deactivated successfully. Dec 13 01:18:27.228168 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:18:27.231181 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:18:27.232860 systemd-logind[1445]: Removed session 21. Dec 13 01:18:27.277586 systemd[1]: Started sshd@21-10.128.0.51:22-147.75.109.163:44148.service - OpenSSH per-connection server daemon (147.75.109.163:44148). Dec 13 01:18:27.571937 sshd[4175]: Accepted publickey for core from 147.75.109.163 port 44148 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:27.574077 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:27.583094 systemd-logind[1445]: New session 22 of user core. Dec 13 01:18:27.588448 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:18:27.859894 sshd[4175]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:27.864944 systemd[1]: sshd@21-10.128.0.51:22-147.75.109.163:44148.service: Deactivated successfully. 
Dec 13 01:18:27.867886 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:18:27.870356 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:18:27.871968 systemd-logind[1445]: Removed session 22. Dec 13 01:18:32.917770 systemd[1]: Started sshd@22-10.128.0.51:22-147.75.109.163:44152.service - OpenSSH per-connection server daemon (147.75.109.163:44152). Dec 13 01:18:33.209469 sshd[4190]: Accepted publickey for core from 147.75.109.163 port 44152 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:33.211426 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:33.218163 systemd-logind[1445]: New session 23 of user core. Dec 13 01:18:33.220476 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:18:33.505887 sshd[4190]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:33.511419 systemd[1]: sshd@22-10.128.0.51:22-147.75.109.163:44152.service: Deactivated successfully. Dec 13 01:18:33.514996 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:18:33.517389 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:18:33.519784 systemd-logind[1445]: Removed session 23. Dec 13 01:18:38.564845 systemd[1]: Started sshd@23-10.128.0.51:22-147.75.109.163:49462.service - OpenSSH per-connection server daemon (147.75.109.163:49462). Dec 13 01:18:38.864011 sshd[4203]: Accepted publickey for core from 147.75.109.163 port 49462 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:38.865996 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:38.873748 systemd-logind[1445]: New session 24 of user core. Dec 13 01:18:38.879632 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 13 01:18:39.150331 sshd[4203]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:39.156487 systemd[1]: sshd@23-10.128.0.51:22-147.75.109.163:49462.service: Deactivated successfully. Dec 13 01:18:39.160299 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:18:39.161508 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:18:39.162999 systemd-logind[1445]: Removed session 24. Dec 13 01:18:44.205632 systemd[1]: Started sshd@24-10.128.0.51:22-147.75.109.163:49470.service - OpenSSH per-connection server daemon (147.75.109.163:49470). Dec 13 01:18:44.489917 sshd[4216]: Accepted publickey for core from 147.75.109.163 port 49470 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:44.491806 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:44.498568 systemd-logind[1445]: New session 25 of user core. Dec 13 01:18:44.504462 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:18:44.772962 sshd[4216]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:44.777607 systemd[1]: sshd@24-10.128.0.51:22-147.75.109.163:49470.service: Deactivated successfully. Dec 13 01:18:44.780373 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:18:44.782625 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:18:44.784195 systemd-logind[1445]: Removed session 25. Dec 13 01:18:44.830726 systemd[1]: Started sshd@25-10.128.0.51:22-147.75.109.163:49486.service - OpenSSH per-connection server daemon (147.75.109.163:49486). Dec 13 01:18:45.113829 sshd[4229]: Accepted publickey for core from 147.75.109.163 port 49486 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:45.115999 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:45.123343 systemd-logind[1445]: New session 26 of user core. 
Dec 13 01:18:45.127474 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:18:47.541580 containerd[1459]: time="2024-12-13T01:18:47.541503914Z" level=info msg="StopContainer for \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\" with timeout 30 (s)" Dec 13 01:18:47.543268 containerd[1459]: time="2024-12-13T01:18:47.542784572Z" level=info msg="Stop container \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\" with signal terminated" Dec 13 01:18:47.573593 systemd[1]: cri-containerd-b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1.scope: Deactivated successfully. Dec 13 01:18:47.580574 containerd[1459]: time="2024-12-13T01:18:47.580526946Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:18:47.593959 containerd[1459]: time="2024-12-13T01:18:47.593904161Z" level=info msg="StopContainer for \"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\" with timeout 2 (s)" Dec 13 01:18:47.594586 containerd[1459]: time="2024-12-13T01:18:47.594486507Z" level=info msg="Stop container \"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\" with signal terminated" Dec 13 01:18:47.608279 systemd-networkd[1371]: lxc_health: Link DOWN Dec 13 01:18:47.608290 systemd-networkd[1371]: lxc_health: Lost carrier Dec 13 01:18:47.633384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1-rootfs.mount: Deactivated successfully. Dec 13 01:18:47.635780 systemd[1]: cri-containerd-13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00.scope: Deactivated successfully. 
Dec 13 01:18:47.636537 systemd[1]: cri-containerd-13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00.scope: Consumed 9.561s CPU time. Dec 13 01:18:47.664096 containerd[1459]: time="2024-12-13T01:18:47.663789463Z" level=info msg="shim disconnected" id=b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1 namespace=k8s.io Dec 13 01:18:47.664096 containerd[1459]: time="2024-12-13T01:18:47.663864785Z" level=warning msg="cleaning up after shim disconnected" id=b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1 namespace=k8s.io Dec 13 01:18:47.664096 containerd[1459]: time="2024-12-13T01:18:47.663882764Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:18:47.673680 containerd[1459]: time="2024-12-13T01:18:47.673404587Z" level=info msg="shim disconnected" id=13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00 namespace=k8s.io Dec 13 01:18:47.673680 containerd[1459]: time="2024-12-13T01:18:47.673473467Z" level=warning msg="cleaning up after shim disconnected" id=13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00 namespace=k8s.io Dec 13 01:18:47.673680 containerd[1459]: time="2024-12-13T01:18:47.673489829Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:18:47.673908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00-rootfs.mount: Deactivated successfully. 
Dec 13 01:18:47.696180 kubelet[2627]: E1213 01:18:47.695989 2627 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:18:47.715425 containerd[1459]: time="2024-12-13T01:18:47.715356628Z" level=info msg="StopContainer for \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\" returns successfully" Dec 13 01:18:47.716751 containerd[1459]: time="2024-12-13T01:18:47.716546104Z" level=info msg="StopPodSandbox for \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\"" Dec 13 01:18:47.716751 containerd[1459]: time="2024-12-13T01:18:47.716598210Z" level=info msg="Container to stop \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:18:47.721374 containerd[1459]: time="2024-12-13T01:18:47.716571015Z" level=info msg="StopContainer for \"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\" returns successfully" Dec 13 01:18:47.721863 containerd[1459]: time="2024-12-13T01:18:47.721828182Z" level=info msg="StopPodSandbox for \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\"" Dec 13 01:18:47.721979 containerd[1459]: time="2024-12-13T01:18:47.721879833Z" level=info msg="Container to stop \"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:18:47.721979 containerd[1459]: time="2024-12-13T01:18:47.721909947Z" level=info msg="Container to stop \"0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:18:47.721979 containerd[1459]: time="2024-12-13T01:18:47.721929569Z" level=info msg="Container to stop \"757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:18:47.721979 containerd[1459]: time="2024-12-13T01:18:47.721946853Z" level=info msg="Container to stop \"0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:18:47.721979 containerd[1459]: time="2024-12-13T01:18:47.721962261Z" level=info msg="Container to stop \"52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:18:47.725398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b-shm.mount: Deactivated successfully. Dec 13 01:18:47.733482 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97-shm.mount: Deactivated successfully. Dec 13 01:18:47.740354 systemd[1]: cri-containerd-939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97.scope: Deactivated successfully. Dec 13 01:18:47.744702 systemd[1]: cri-containerd-96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b.scope: Deactivated successfully. 
Dec 13 01:18:47.783694 containerd[1459]: time="2024-12-13T01:18:47.783431696Z" level=info msg="shim disconnected" id=939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97 namespace=k8s.io Dec 13 01:18:47.783694 containerd[1459]: time="2024-12-13T01:18:47.783521080Z" level=warning msg="cleaning up after shim disconnected" id=939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97 namespace=k8s.io Dec 13 01:18:47.783694 containerd[1459]: time="2024-12-13T01:18:47.783537889Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:18:47.787508 containerd[1459]: time="2024-12-13T01:18:47.785900491Z" level=info msg="shim disconnected" id=96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b namespace=k8s.io Dec 13 01:18:47.787508 containerd[1459]: time="2024-12-13T01:18:47.785975254Z" level=warning msg="cleaning up after shim disconnected" id=96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b namespace=k8s.io Dec 13 01:18:47.787508 containerd[1459]: time="2024-12-13T01:18:47.785991902Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:18:47.814176 containerd[1459]: time="2024-12-13T01:18:47.814116779Z" level=info msg="TearDown network for sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" successfully" Dec 13 01:18:47.814176 containerd[1459]: time="2024-12-13T01:18:47.814173947Z" level=info msg="StopPodSandbox for \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" returns successfully" Dec 13 01:18:47.817483 containerd[1459]: time="2024-12-13T01:18:47.817445023Z" level=info msg="TearDown network for sandbox \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\" successfully" Dec 13 01:18:47.817727 containerd[1459]: time="2024-12-13T01:18:47.817690744Z" level=info msg="StopPodSandbox for \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\" returns successfully" Dec 13 01:18:47.859014 kubelet[2627]: I1213 01:18:47.858974 2627 
scope.go:117] "RemoveContainer" containerID="13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00" Dec 13 01:18:47.861727 containerd[1459]: time="2024-12-13T01:18:47.861669603Z" level=info msg="RemoveContainer for \"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\"" Dec 13 01:18:47.868400 containerd[1459]: time="2024-12-13T01:18:47.868346823Z" level=info msg="RemoveContainer for \"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\" returns successfully" Dec 13 01:18:47.869806 kubelet[2627]: I1213 01:18:47.869774 2627 scope.go:117] "RemoveContainer" containerID="757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c" Dec 13 01:18:47.872139 containerd[1459]: time="2024-12-13T01:18:47.872058710Z" level=info msg="RemoveContainer for \"757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c\"" Dec 13 01:18:47.877929 containerd[1459]: time="2024-12-13T01:18:47.877882370Z" level=info msg="RemoveContainer for \"757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c\" returns successfully" Dec 13 01:18:47.878147 kubelet[2627]: I1213 01:18:47.878116 2627 scope.go:117] "RemoveContainer" containerID="52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab" Dec 13 01:18:47.879566 containerd[1459]: time="2024-12-13T01:18:47.879431258Z" level=info msg="RemoveContainer for \"52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab\"" Dec 13 01:18:47.883730 containerd[1459]: time="2024-12-13T01:18:47.883664404Z" level=info msg="RemoveContainer for \"52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab\" returns successfully" Dec 13 01:18:47.884005 kubelet[2627]: I1213 01:18:47.883970 2627 scope.go:117] "RemoveContainer" containerID="0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2" Dec 13 01:18:47.885589 containerd[1459]: time="2024-12-13T01:18:47.885539834Z" level=info msg="RemoveContainer for \"0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2\"" 
Dec 13 01:18:47.890151 containerd[1459]: time="2024-12-13T01:18:47.890094894Z" level=info msg="RemoveContainer for \"0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2\" returns successfully" Dec 13 01:18:47.890414 kubelet[2627]: I1213 01:18:47.890382 2627 scope.go:117] "RemoveContainer" containerID="0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1" Dec 13 01:18:47.892020 containerd[1459]: time="2024-12-13T01:18:47.891874985Z" level=info msg="RemoveContainer for \"0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1\"" Dec 13 01:18:47.896253 containerd[1459]: time="2024-12-13T01:18:47.896184466Z" level=info msg="RemoveContainer for \"0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1\" returns successfully" Dec 13 01:18:47.896454 kubelet[2627]: I1213 01:18:47.896422 2627 scope.go:117] "RemoveContainer" containerID="13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00" Dec 13 01:18:47.896757 containerd[1459]: time="2024-12-13T01:18:47.896694053Z" level=error msg="ContainerStatus for \"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\": not found" Dec 13 01:18:47.896969 kubelet[2627]: E1213 01:18:47.896913 2627 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\": not found" containerID="13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00" Dec 13 01:18:47.897083 kubelet[2627]: I1213 01:18:47.896954 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00"} err="failed to get container status 
\"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\": rpc error: code = NotFound desc = an error occurred when try to find container \"13a7a6f4f4b97e21d5b57a81d1c427816a91e01a8e556c346d48d0bcf1c2ec00\": not found" Dec 13 01:18:47.897187 kubelet[2627]: I1213 01:18:47.897089 2627 scope.go:117] "RemoveContainer" containerID="757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c" Dec 13 01:18:47.897387 containerd[1459]: time="2024-12-13T01:18:47.897350746Z" level=error msg="ContainerStatus for \"757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c\": not found" Dec 13 01:18:47.897630 kubelet[2627]: E1213 01:18:47.897587 2627 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c\": not found" containerID="757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c" Dec 13 01:18:47.897811 kubelet[2627]: I1213 01:18:47.897680 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c"} err="failed to get container status \"757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c\": rpc error: code = NotFound desc = an error occurred when try to find container \"757a336f10fa81805a9c623057ba13f055c4bd5e35e2aea41f56f88f8b4f994c\": not found" Dec 13 01:18:47.897811 kubelet[2627]: I1213 01:18:47.897713 2627 scope.go:117] "RemoveContainer" containerID="52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab" Dec 13 01:18:47.898116 containerd[1459]: time="2024-12-13T01:18:47.897954242Z" level=error msg="ContainerStatus for 
\"52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab\": not found" Dec 13 01:18:47.898196 kubelet[2627]: E1213 01:18:47.898134 2627 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab\": not found" containerID="52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab" Dec 13 01:18:47.898196 kubelet[2627]: I1213 01:18:47.898167 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab"} err="failed to get container status \"52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"52109a98c66937a57db45fc2875599cc66996891e0e5a89b8bd4f9a51589a7ab\": not found" Dec 13 01:18:47.898196 kubelet[2627]: I1213 01:18:47.898194 2627 scope.go:117] "RemoveContainer" containerID="0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2" Dec 13 01:18:47.898545 containerd[1459]: time="2024-12-13T01:18:47.898473751Z" level=error msg="ContainerStatus for \"0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2\": not found" Dec 13 01:18:47.898876 kubelet[2627]: E1213 01:18:47.898732 2627 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2\": not found" 
containerID="0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2" Dec 13 01:18:47.898876 kubelet[2627]: I1213 01:18:47.898768 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2"} err="failed to get container status \"0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"0345fac67354ca0b89edb1560931a571701c7b25eaad1d1ff27db78de29f94f2\": not found" Dec 13 01:18:47.898876 kubelet[2627]: I1213 01:18:47.898793 2627 scope.go:117] "RemoveContainer" containerID="0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1" Dec 13 01:18:47.899102 containerd[1459]: time="2024-12-13T01:18:47.899063531Z" level=error msg="ContainerStatus for \"0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1\": not found" Dec 13 01:18:47.899277 kubelet[2627]: E1213 01:18:47.899236 2627 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1\": not found" containerID="0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1" Dec 13 01:18:47.899372 kubelet[2627]: I1213 01:18:47.899271 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1"} err="failed to get container status \"0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f0f9a81993b583ea00e5e2072937dc69a5e5ccf368a1554f3b9cf157c281bb1\": not found" Dec 13 
01:18:47.899372 kubelet[2627]: I1213 01:18:47.899297 2627 scope.go:117] "RemoveContainer" containerID="b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1" Dec 13 01:18:47.900667 containerd[1459]: time="2024-12-13T01:18:47.900635073Z" level=info msg="RemoveContainer for \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\"" Dec 13 01:18:47.905714 containerd[1459]: time="2024-12-13T01:18:47.905666382Z" level=info msg="RemoveContainer for \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\" returns successfully" Dec 13 01:18:47.906017 kubelet[2627]: I1213 01:18:47.905936 2627 scope.go:117] "RemoveContainer" containerID="b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1" Dec 13 01:18:47.906318 containerd[1459]: time="2024-12-13T01:18:47.906261033Z" level=error msg="ContainerStatus for \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\": not found" Dec 13 01:18:47.906544 kubelet[2627]: E1213 01:18:47.906434 2627 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\": not found" containerID="b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1" Dec 13 01:18:47.906544 kubelet[2627]: I1213 01:18:47.906468 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1"} err="failed to get container status \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\": rpc error: code = NotFound desc = an error occurred when try to find container \"b424712cd2d27d6ec587a7c0f97a3412e04da2821efe949e448a4845000e0cb1\": not found" Dec 13 01:18:47.918237 
kubelet[2627]: I1213 01:18:47.918039 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-xtables-lock\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.918237 kubelet[2627]: I1213 01:18:47.918123 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-host-proc-sys-kernel\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.918237 kubelet[2627]: I1213 01:18:47.918150 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-hostproc\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.918237 kubelet[2627]: I1213 01:18:47.918156 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:47.918237 kubelet[2627]: I1213 01:18:47.918185 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gb68\" (UniqueName: \"kubernetes.io/projected/97ece0ab-6dbb-496d-b3b2-98ee98939f74-kube-api-access-4gb68\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.918237 kubelet[2627]: I1213 01:18:47.918227 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cilium-run\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.918855 kubelet[2627]: I1213 01:18:47.918254 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97ece0ab-6dbb-496d-b3b2-98ee98939f74-clustermesh-secrets\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.918855 kubelet[2627]: I1213 01:18:47.918285 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cilium-config-path\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.918855 kubelet[2627]: I1213 01:18:47.918327 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27f2dc47-d491-48ca-ba91-1a9aba4abdd7-cilium-config-path\") pod \"27f2dc47-d491-48ca-ba91-1a9aba4abdd7\" (UID: \"27f2dc47-d491-48ca-ba91-1a9aba4abdd7\") " Dec 13 01:18:47.918855 kubelet[2627]: I1213 01:18:47.918355 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-bpf-maps\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.918855 kubelet[2627]: I1213 01:18:47.918377 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-etc-cni-netd\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.918855 kubelet[2627]: I1213 01:18:47.918405 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhzfn\" (UniqueName: \"kubernetes.io/projected/27f2dc47-d491-48ca-ba91-1a9aba4abdd7-kube-api-access-qhzfn\") pod \"27f2dc47-d491-48ca-ba91-1a9aba4abdd7\" (UID: \"27f2dc47-d491-48ca-ba91-1a9aba4abdd7\") " Dec 13 01:18:47.919246 kubelet[2627]: I1213 01:18:47.918431 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97ece0ab-6dbb-496d-b3b2-98ee98939f74-hubble-tls\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.919246 kubelet[2627]: I1213 01:18:47.918457 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cni-path\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.919246 kubelet[2627]: I1213 01:18:47.918483 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-lib-modules\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.919246 kubelet[2627]: I1213 01:18:47.918511 2627 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-host-proc-sys-net\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.919246 kubelet[2627]: I1213 01:18:47.918538 2627 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cilium-cgroup\") pod \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\" (UID: \"97ece0ab-6dbb-496d-b3b2-98ee98939f74\") " Dec 13 01:18:47.919246 kubelet[2627]: I1213 01:18:47.918598 2627 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-xtables-lock\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:47.919554 kubelet[2627]: I1213 01:18:47.918643 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:47.919554 kubelet[2627]: I1213 01:18:47.918690 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:47.919554 kubelet[2627]: I1213 01:18:47.918836 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-hostproc" (OuterVolumeSpecName: "hostproc") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:47.920982 kubelet[2627]: I1213 01:18:47.919780 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:47.920982 kubelet[2627]: I1213 01:18:47.919854 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:47.924528 kubelet[2627]: I1213 01:18:47.924361 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:47.925067 kubelet[2627]: I1213 01:18:47.925012 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cni-path" (OuterVolumeSpecName: "cni-path") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:47.925290 kubelet[2627]: I1213 01:18:47.925165 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:47.925636 kubelet[2627]: I1213 01:18:47.925398 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:18:47.932064 kubelet[2627]: I1213 01:18:47.931918 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ece0ab-6dbb-496d-b3b2-98ee98939f74-kube-api-access-4gb68" (OuterVolumeSpecName: "kube-api-access-4gb68") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "kube-api-access-4gb68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:18:47.933338 kubelet[2627]: I1213 01:18:47.933269 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:18:47.934667 kubelet[2627]: I1213 01:18:47.934378 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ece0ab-6dbb-496d-b3b2-98ee98939f74-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:18:47.934988 kubelet[2627]: I1213 01:18:47.934937 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ece0ab-6dbb-496d-b3b2-98ee98939f74-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "97ece0ab-6dbb-496d-b3b2-98ee98939f74" (UID: "97ece0ab-6dbb-496d-b3b2-98ee98939f74"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:18:47.936763 kubelet[2627]: I1213 01:18:47.936715 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27f2dc47-d491-48ca-ba91-1a9aba4abdd7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "27f2dc47-d491-48ca-ba91-1a9aba4abdd7" (UID: "27f2dc47-d491-48ca-ba91-1a9aba4abdd7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:18:47.937418 kubelet[2627]: I1213 01:18:47.937386 2627 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f2dc47-d491-48ca-ba91-1a9aba4abdd7-kube-api-access-qhzfn" (OuterVolumeSpecName: "kube-api-access-qhzfn") pod "27f2dc47-d491-48ca-ba91-1a9aba4abdd7" (UID: "27f2dc47-d491-48ca-ba91-1a9aba4abdd7"). InnerVolumeSpecName "kube-api-access-qhzfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:18:48.019426 kubelet[2627]: I1213 01:18:48.019357 2627 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cilium-run\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019426 kubelet[2627]: I1213 01:18:48.019402 2627 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/97ece0ab-6dbb-496d-b3b2-98ee98939f74-clustermesh-secrets\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019426 kubelet[2627]: I1213 01:18:48.019421 2627 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cilium-config-path\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019426 kubelet[2627]: I1213 01:18:48.019435 2627 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27f2dc47-d491-48ca-ba91-1a9aba4abdd7-cilium-config-path\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019767 kubelet[2627]: I1213 01:18:48.019455 2627 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qhzfn\" (UniqueName: 
\"kubernetes.io/projected/27f2dc47-d491-48ca-ba91-1a9aba4abdd7-kube-api-access-qhzfn\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019767 kubelet[2627]: I1213 01:18:48.019474 2627 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-bpf-maps\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019767 kubelet[2627]: I1213 01:18:48.019487 2627 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-etc-cni-netd\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019767 kubelet[2627]: I1213 01:18:48.019500 2627 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/97ece0ab-6dbb-496d-b3b2-98ee98939f74-hubble-tls\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019767 kubelet[2627]: I1213 01:18:48.019513 2627 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cni-path\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019767 kubelet[2627]: I1213 01:18:48.019526 2627 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-lib-modules\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019767 kubelet[2627]: I1213 01:18:48.019541 2627 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-host-proc-sys-net\") on node 
\"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019996 kubelet[2627]: I1213 01:18:48.019558 2627 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-cilium-cgroup\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019996 kubelet[2627]: I1213 01:18:48.019573 2627 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-host-proc-sys-kernel\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019996 kubelet[2627]: I1213 01:18:48.019588 2627 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/97ece0ab-6dbb-496d-b3b2-98ee98939f74-hostproc\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.019996 kubelet[2627]: I1213 01:18:48.019604 2627 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4gb68\" (UniqueName: \"kubernetes.io/projected/97ece0ab-6dbb-496d-b3b2-98ee98939f74-kube-api-access-4gb68\") on node \"ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:18:48.168493 systemd[1]: Removed slice kubepods-burstable-pod97ece0ab_6dbb_496d_b3b2_98ee98939f74.slice - libcontainer container kubepods-burstable-pod97ece0ab_6dbb_496d_b3b2_98ee98939f74.slice. Dec 13 01:18:48.168685 systemd[1]: kubepods-burstable-pod97ece0ab_6dbb_496d_b3b2_98ee98939f74.slice: Consumed 9.684s CPU time. Dec 13 01:18:48.172304 systemd[1]: Removed slice kubepods-besteffort-pod27f2dc47_d491_48ca_ba91_1a9aba4abdd7.slice - libcontainer container kubepods-besteffort-pod27f2dc47_d491_48ca_ba91_1a9aba4abdd7.slice. 
Dec 13 01:18:48.510850 kubelet[2627]: I1213 01:18:48.510703 2627 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f2dc47-d491-48ca-ba91-1a9aba4abdd7" path="/var/lib/kubelet/pods/27f2dc47-d491-48ca-ba91-1a9aba4abdd7/volumes" Dec 13 01:18:48.511538 kubelet[2627]: I1213 01:18:48.511490 2627 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ece0ab-6dbb-496d-b3b2-98ee98939f74" path="/var/lib/kubelet/pods/97ece0ab-6dbb-496d-b3b2-98ee98939f74/volumes" Dec 13 01:18:48.554850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97-rootfs.mount: Deactivated successfully. Dec 13 01:18:48.555307 systemd[1]: var-lib-kubelet-pods-97ece0ab\x2d6dbb\x2d496d\x2db3b2\x2d98ee98939f74-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4gb68.mount: Deactivated successfully. Dec 13 01:18:48.555562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b-rootfs.mount: Deactivated successfully. Dec 13 01:18:48.555692 systemd[1]: var-lib-kubelet-pods-27f2dc47\x2dd491\x2d48ca\x2dba91\x2d1a9aba4abdd7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqhzfn.mount: Deactivated successfully. Dec 13 01:18:48.555810 systemd[1]: var-lib-kubelet-pods-97ece0ab\x2d6dbb\x2d496d\x2db3b2\x2d98ee98939f74-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:18:48.555939 systemd[1]: var-lib-kubelet-pods-97ece0ab\x2d6dbb\x2d496d\x2db3b2\x2d98ee98939f74-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:18:49.516358 sshd[4229]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:49.521985 systemd[1]: sshd@25-10.128.0.51:22-147.75.109.163:49486.service: Deactivated successfully. Dec 13 01:18:49.524650 systemd[1]: session-26.scope: Deactivated successfully. 
Dec 13 01:18:49.524983 systemd[1]: session-26.scope: Consumed 1.655s CPU time. Dec 13 01:18:49.527222 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:18:49.529221 systemd-logind[1445]: Removed session 26. Dec 13 01:18:49.572670 systemd[1]: Started sshd@26-10.128.0.51:22-147.75.109.163:34964.service - OpenSSH per-connection server daemon (147.75.109.163:34964). Dec 13 01:18:49.865243 sshd[4391]: Accepted publickey for core from 147.75.109.163 port 34964 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:18:49.867142 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:18:49.873128 systemd-logind[1445]: New session 27 of user core. Dec 13 01:18:49.883431 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:18:50.375462 ntpd[1428]: Deleting interface #12 lxc_health, fe80::3018:31ff:fecf:969b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=67 secs Dec 13 01:18:50.376120 ntpd[1428]: 13 Dec 01:18:50 ntpd[1428]: Deleting interface #12 lxc_health, fe80::3018:31ff:fecf:969b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=67 secs Dec 13 01:18:50.709326 kubelet[2627]: I1213 01:18:50.707652 2627 topology_manager.go:215] "Topology Admit Handler" podUID="cb45aaef-28e6-4321-b822-c03e74e9cbab" podNamespace="kube-system" podName="cilium-z7g6n" Dec 13 01:18:50.709326 kubelet[2627]: E1213 01:18:50.707749 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97ece0ab-6dbb-496d-b3b2-98ee98939f74" containerName="mount-bpf-fs" Dec 13 01:18:50.709326 kubelet[2627]: E1213 01:18:50.707767 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97ece0ab-6dbb-496d-b3b2-98ee98939f74" containerName="cilium-agent" Dec 13 01:18:50.709326 kubelet[2627]: E1213 01:18:50.707780 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="27f2dc47-d491-48ca-ba91-1a9aba4abdd7" 
containerName="cilium-operator" Dec 13 01:18:50.709326 kubelet[2627]: E1213 01:18:50.707790 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97ece0ab-6dbb-496d-b3b2-98ee98939f74" containerName="apply-sysctl-overwrites" Dec 13 01:18:50.709326 kubelet[2627]: E1213 01:18:50.707801 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97ece0ab-6dbb-496d-b3b2-98ee98939f74" containerName="mount-cgroup" Dec 13 01:18:50.709326 kubelet[2627]: E1213 01:18:50.707811 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97ece0ab-6dbb-496d-b3b2-98ee98939f74" containerName="clean-cilium-state" Dec 13 01:18:50.709326 kubelet[2627]: I1213 01:18:50.707851 2627 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f2dc47-d491-48ca-ba91-1a9aba4abdd7" containerName="cilium-operator" Dec 13 01:18:50.709326 kubelet[2627]: I1213 01:18:50.707862 2627 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ece0ab-6dbb-496d-b3b2-98ee98939f74" containerName="cilium-agent" Dec 13 01:18:50.724496 sshd[4391]: pam_unix(sshd:session): session closed for user core Dec 13 01:18:50.728859 systemd[1]: Created slice kubepods-burstable-podcb45aaef_28e6_4321_b822_c03e74e9cbab.slice - libcontainer container kubepods-burstable-podcb45aaef_28e6_4321_b822_c03e74e9cbab.slice. Dec 13 01:18:50.734808 systemd[1]: sshd@26-10.128.0.51:22-147.75.109.163:34964.service: Deactivated successfully. Dec 13 01:18:50.740265 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 01:18:50.745034 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit. Dec 13 01:18:50.747714 systemd-logind[1445]: Removed session 27. Dec 13 01:18:50.790360 systemd[1]: Started sshd@27-10.128.0.51:22-147.75.109.163:34972.service - OpenSSH per-connection server daemon (147.75.109.163:34972). 
Dec 13 01:18:50.840795 kubelet[2627]: I1213 01:18:50.840725 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb45aaef-28e6-4321-b822-c03e74e9cbab-cilium-cgroup\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.840795 kubelet[2627]: I1213 01:18:50.840785 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5g98\" (UniqueName: \"kubernetes.io/projected/cb45aaef-28e6-4321-b822-c03e74e9cbab-kube-api-access-w5g98\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.841051 kubelet[2627]: I1213 01:18:50.840819 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb45aaef-28e6-4321-b822-c03e74e9cbab-cilium-run\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.841051 kubelet[2627]: I1213 01:18:50.840846 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cb45aaef-28e6-4321-b822-c03e74e9cbab-cilium-ipsec-secrets\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.841051 kubelet[2627]: I1213 01:18:50.840872 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb45aaef-28e6-4321-b822-c03e74e9cbab-clustermesh-secrets\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.841051 kubelet[2627]: I1213 01:18:50.840897 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb45aaef-28e6-4321-b822-c03e74e9cbab-etc-cni-netd\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.841051 kubelet[2627]: I1213 01:18:50.840945 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb45aaef-28e6-4321-b822-c03e74e9cbab-cilium-config-path\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.841340 kubelet[2627]: I1213 01:18:50.840973 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb45aaef-28e6-4321-b822-c03e74e9cbab-host-proc-sys-net\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.841340 kubelet[2627]: I1213 01:18:50.840997 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb45aaef-28e6-4321-b822-c03e74e9cbab-xtables-lock\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.841340 kubelet[2627]: I1213 01:18:50.841026 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb45aaef-28e6-4321-b822-c03e74e9cbab-host-proc-sys-kernel\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.841340 kubelet[2627]: I1213 01:18:50.841057 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb45aaef-28e6-4321-b822-c03e74e9cbab-cni-path\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.841340 kubelet[2627]: I1213 01:18:50.841084 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb45aaef-28e6-4321-b822-c03e74e9cbab-lib-modules\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.841340 kubelet[2627]: I1213 01:18:50.841111 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb45aaef-28e6-4321-b822-c03e74e9cbab-hubble-tls\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.841604 kubelet[2627]: I1213 01:18:50.841146 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb45aaef-28e6-4321-b822-c03e74e9cbab-bpf-maps\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:50.841604 kubelet[2627]: I1213 01:18:50.841176 2627 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb45aaef-28e6-4321-b822-c03e74e9cbab-hostproc\") pod \"cilium-z7g6n\" (UID: \"cb45aaef-28e6-4321-b822-c03e74e9cbab\") " pod="kube-system/cilium-z7g6n"
Dec 13 01:18:51.039142 containerd[1459]: time="2024-12-13T01:18:51.038999340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z7g6n,Uid:cb45aaef-28e6-4321-b822-c03e74e9cbab,Namespace:kube-system,Attempt:0,}"
Dec 13 01:18:51.080464 containerd[1459]: time="2024-12-13T01:18:51.080060047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:18:51.080464 containerd[1459]: time="2024-12-13T01:18:51.080126645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:18:51.080464 containerd[1459]: time="2024-12-13T01:18:51.080146138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:51.080464 containerd[1459]: time="2024-12-13T01:18:51.080285130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:18:51.110456 systemd[1]: Started cri-containerd-277a878a97a0e8f1833f4a740a8c2233151bb656ac50944b9f877b9f92ef4c99.scope - libcontainer container 277a878a97a0e8f1833f4a740a8c2233151bb656ac50944b9f877b9f92ef4c99.
Dec 13 01:18:51.111468 sshd[4402]: Accepted publickey for core from 147.75.109.163 port 34972 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:51.114418 sshd[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:51.124244 systemd-logind[1445]: New session 28 of user core.
Dec 13 01:18:51.129478 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 01:18:51.155744 containerd[1459]: time="2024-12-13T01:18:51.155635859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z7g6n,Uid:cb45aaef-28e6-4321-b822-c03e74e9cbab,Namespace:kube-system,Attempt:0,} returns sandbox id \"277a878a97a0e8f1833f4a740a8c2233151bb656ac50944b9f877b9f92ef4c99\""
Dec 13 01:18:51.159490 containerd[1459]: time="2024-12-13T01:18:51.159428801Z" level=info msg="CreateContainer within sandbox \"277a878a97a0e8f1833f4a740a8c2233151bb656ac50944b9f877b9f92ef4c99\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:18:51.178012 containerd[1459]: time="2024-12-13T01:18:51.177942230Z" level=info msg="CreateContainer within sandbox \"277a878a97a0e8f1833f4a740a8c2233151bb656ac50944b9f877b9f92ef4c99\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8ffd72042717cf370ce93d373f2df2933fb640f524edc42cc53ef5a67aab486f\""
Dec 13 01:18:51.178804 containerd[1459]: time="2024-12-13T01:18:51.178712359Z" level=info msg="StartContainer for \"8ffd72042717cf370ce93d373f2df2933fb640f524edc42cc53ef5a67aab486f\""
Dec 13 01:18:51.219885 systemd[1]: Started cri-containerd-8ffd72042717cf370ce93d373f2df2933fb640f524edc42cc53ef5a67aab486f.scope - libcontainer container 8ffd72042717cf370ce93d373f2df2933fb640f524edc42cc53ef5a67aab486f.
Dec 13 01:18:51.261882 containerd[1459]: time="2024-12-13T01:18:51.261710076Z" level=info msg="StartContainer for \"8ffd72042717cf370ce93d373f2df2933fb640f524edc42cc53ef5a67aab486f\" returns successfully"
Dec 13 01:18:51.275278 systemd[1]: cri-containerd-8ffd72042717cf370ce93d373f2df2933fb640f524edc42cc53ef5a67aab486f.scope: Deactivated successfully.
Dec 13 01:18:51.321404 containerd[1459]: time="2024-12-13T01:18:51.321316371Z" level=info msg="shim disconnected" id=8ffd72042717cf370ce93d373f2df2933fb640f524edc42cc53ef5a67aab486f namespace=k8s.io
Dec 13 01:18:51.321828 containerd[1459]: time="2024-12-13T01:18:51.321396491Z" level=warning msg="cleaning up after shim disconnected" id=8ffd72042717cf370ce93d373f2df2933fb640f524edc42cc53ef5a67aab486f namespace=k8s.io
Dec 13 01:18:51.321828 containerd[1459]: time="2024-12-13T01:18:51.321434675Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:51.329513 sshd[4402]: pam_unix(sshd:session): session closed for user core
Dec 13 01:18:51.338008 systemd[1]: sshd@27-10.128.0.51:22-147.75.109.163:34972.service: Deactivated successfully.
Dec 13 01:18:51.343036 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:18:51.347334 systemd-logind[1445]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:18:51.349803 systemd-logind[1445]: Removed session 28.
Dec 13 01:18:51.387665 systemd[1]: Started sshd@28-10.128.0.51:22-147.75.109.163:34988.service - OpenSSH per-connection server daemon (147.75.109.163:34988).
Dec 13 01:18:51.677025 sshd[4515]: Accepted publickey for core from 147.75.109.163 port 34988 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:18:51.679440 sshd[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:18:51.687434 systemd-logind[1445]: New session 29 of user core.
Dec 13 01:18:51.692536 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 01:18:51.896328 containerd[1459]: time="2024-12-13T01:18:51.895705387Z" level=info msg="CreateContainer within sandbox \"277a878a97a0e8f1833f4a740a8c2233151bb656ac50944b9f877b9f92ef4c99\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:18:51.919427 containerd[1459]: time="2024-12-13T01:18:51.919273766Z" level=info msg="CreateContainer within sandbox \"277a878a97a0e8f1833f4a740a8c2233151bb656ac50944b9f877b9f92ef4c99\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c3edb04877725aa5f61aab30ac7c4cee5c0db6180f74320d26163915302928c4\""
Dec 13 01:18:51.921372 containerd[1459]: time="2024-12-13T01:18:51.921150039Z" level=info msg="StartContainer for \"c3edb04877725aa5f61aab30ac7c4cee5c0db6180f74320d26163915302928c4\""
Dec 13 01:18:52.001512 systemd[1]: Started cri-containerd-c3edb04877725aa5f61aab30ac7c4cee5c0db6180f74320d26163915302928c4.scope - libcontainer container c3edb04877725aa5f61aab30ac7c4cee5c0db6180f74320d26163915302928c4.
Dec 13 01:18:52.037707 containerd[1459]: time="2024-12-13T01:18:52.037641770Z" level=info msg="StartContainer for \"c3edb04877725aa5f61aab30ac7c4cee5c0db6180f74320d26163915302928c4\" returns successfully"
Dec 13 01:18:52.047875 systemd[1]: cri-containerd-c3edb04877725aa5f61aab30ac7c4cee5c0db6180f74320d26163915302928c4.scope: Deactivated successfully.
Dec 13 01:18:52.082459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3edb04877725aa5f61aab30ac7c4cee5c0db6180f74320d26163915302928c4-rootfs.mount: Deactivated successfully.
Dec 13 01:18:52.085807 containerd[1459]: time="2024-12-13T01:18:52.085704182Z" level=info msg="shim disconnected" id=c3edb04877725aa5f61aab30ac7c4cee5c0db6180f74320d26163915302928c4 namespace=k8s.io
Dec 13 01:18:52.085807 containerd[1459]: time="2024-12-13T01:18:52.085794166Z" level=warning msg="cleaning up after shim disconnected" id=c3edb04877725aa5f61aab30ac7c4cee5c0db6180f74320d26163915302928c4 namespace=k8s.io
Dec 13 01:18:52.085807 containerd[1459]: time="2024-12-13T01:18:52.085809292Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:52.105540 containerd[1459]: time="2024-12-13T01:18:52.105461847Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:18:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:18:52.697187 kubelet[2627]: E1213 01:18:52.697095 2627 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:18:52.894417 containerd[1459]: time="2024-12-13T01:18:52.894252042Z" level=info msg="CreateContainer within sandbox \"277a878a97a0e8f1833f4a740a8c2233151bb656ac50944b9f877b9f92ef4c99\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:18:52.920178 containerd[1459]: time="2024-12-13T01:18:52.920119105Z" level=info msg="CreateContainer within sandbox \"277a878a97a0e8f1833f4a740a8c2233151bb656ac50944b9f877b9f92ef4c99\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f18d710222a94235e2dea0f26f8b78c1e14d4f1683f80990d0d43d5c1d5c1f2a\""
Dec 13 01:18:52.923251 containerd[1459]: time="2024-12-13T01:18:52.921538274Z" level=info msg="StartContainer for \"f18d710222a94235e2dea0f26f8b78c1e14d4f1683f80990d0d43d5c1d5c1f2a\""
Dec 13 01:18:52.972494 systemd[1]: Started cri-containerd-f18d710222a94235e2dea0f26f8b78c1e14d4f1683f80990d0d43d5c1d5c1f2a.scope - libcontainer container f18d710222a94235e2dea0f26f8b78c1e14d4f1683f80990d0d43d5c1d5c1f2a.
Dec 13 01:18:53.015638 containerd[1459]: time="2024-12-13T01:18:53.015568169Z" level=info msg="StartContainer for \"f18d710222a94235e2dea0f26f8b78c1e14d4f1683f80990d0d43d5c1d5c1f2a\" returns successfully"
Dec 13 01:18:53.019625 systemd[1]: cri-containerd-f18d710222a94235e2dea0f26f8b78c1e14d4f1683f80990d0d43d5c1d5c1f2a.scope: Deactivated successfully.
Dec 13 01:18:53.064430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f18d710222a94235e2dea0f26f8b78c1e14d4f1683f80990d0d43d5c1d5c1f2a-rootfs.mount: Deactivated successfully.
Dec 13 01:18:53.069856 containerd[1459]: time="2024-12-13T01:18:53.069776139Z" level=info msg="shim disconnected" id=f18d710222a94235e2dea0f26f8b78c1e14d4f1683f80990d0d43d5c1d5c1f2a namespace=k8s.io
Dec 13 01:18:53.070521 containerd[1459]: time="2024-12-13T01:18:53.070468807Z" level=warning msg="cleaning up after shim disconnected" id=f18d710222a94235e2dea0f26f8b78c1e14d4f1683f80990d0d43d5c1d5c1f2a namespace=k8s.io
Dec 13 01:18:53.070882 containerd[1459]: time="2024-12-13T01:18:53.070665234Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:53.089252 containerd[1459]: time="2024-12-13T01:18:53.089171395Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:18:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:18:53.899192 containerd[1459]: time="2024-12-13T01:18:53.899122756Z" level=info msg="CreateContainer within sandbox \"277a878a97a0e8f1833f4a740a8c2233151bb656ac50944b9f877b9f92ef4c99\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:18:53.917081 containerd[1459]: time="2024-12-13T01:18:53.916977797Z" level=info msg="CreateContainer within sandbox \"277a878a97a0e8f1833f4a740a8c2233151bb656ac50944b9f877b9f92ef4c99\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f8ab0d03e3f0409b5dff567cd2d739f6f9515cfc044c6788da18da5a75c5f8d5\""
Dec 13 01:18:53.923443 containerd[1459]: time="2024-12-13T01:18:53.923394812Z" level=info msg="StartContainer for \"f8ab0d03e3f0409b5dff567cd2d739f6f9515cfc044c6788da18da5a75c5f8d5\""
Dec 13 01:18:53.984504 systemd[1]: Started cri-containerd-f8ab0d03e3f0409b5dff567cd2d739f6f9515cfc044c6788da18da5a75c5f8d5.scope - libcontainer container f8ab0d03e3f0409b5dff567cd2d739f6f9515cfc044c6788da18da5a75c5f8d5.
Dec 13 01:18:54.035660 systemd[1]: cri-containerd-f8ab0d03e3f0409b5dff567cd2d739f6f9515cfc044c6788da18da5a75c5f8d5.scope: Deactivated successfully.
Dec 13 01:18:54.038862 containerd[1459]: time="2024-12-13T01:18:54.038550530Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb45aaef_28e6_4321_b822_c03e74e9cbab.slice/cri-containerd-f8ab0d03e3f0409b5dff567cd2d739f6f9515cfc044c6788da18da5a75c5f8d5.scope/memory.events\": no such file or directory"
Dec 13 01:18:54.041780 containerd[1459]: time="2024-12-13T01:18:54.041637355Z" level=info msg="StartContainer for \"f8ab0d03e3f0409b5dff567cd2d739f6f9515cfc044c6788da18da5a75c5f8d5\" returns successfully"
Dec 13 01:18:54.077762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8ab0d03e3f0409b5dff567cd2d739f6f9515cfc044c6788da18da5a75c5f8d5-rootfs.mount: Deactivated successfully.
Dec 13 01:18:54.082991 containerd[1459]: time="2024-12-13T01:18:54.082913458Z" level=info msg="shim disconnected" id=f8ab0d03e3f0409b5dff567cd2d739f6f9515cfc044c6788da18da5a75c5f8d5 namespace=k8s.io
Dec 13 01:18:54.082991 containerd[1459]: time="2024-12-13T01:18:54.082992153Z" level=warning msg="cleaning up after shim disconnected" id=f8ab0d03e3f0409b5dff567cd2d739f6f9515cfc044c6788da18da5a75c5f8d5 namespace=k8s.io
Dec 13 01:18:54.083329 containerd[1459]: time="2024-12-13T01:18:54.083006355Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:18:54.759949 kubelet[2627]: I1213 01:18:54.759872 2627 setters.go:580] "Node became not ready" node="ci-4081-2-1-4873a8059e1999158086.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:18:54Z","lastTransitionTime":"2024-12-13T01:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:18:54.910998 containerd[1459]: time="2024-12-13T01:18:54.909687591Z" level=info msg="CreateContainer within sandbox \"277a878a97a0e8f1833f4a740a8c2233151bb656ac50944b9f877b9f92ef4c99\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:18:54.938290 containerd[1459]: time="2024-12-13T01:18:54.938145590Z" level=info msg="CreateContainer within sandbox \"277a878a97a0e8f1833f4a740a8c2233151bb656ac50944b9f877b9f92ef4c99\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2dfc68dbfceeebda2503cfb547257e98a5b7428b6dd6bd8849d52810265eba5f\""
Dec 13 01:18:54.939154 containerd[1459]: time="2024-12-13T01:18:54.939096693Z" level=info msg="StartContainer for \"2dfc68dbfceeebda2503cfb547257e98a5b7428b6dd6bd8849d52810265eba5f\""
Dec 13 01:18:54.987439 systemd[1]: Started cri-containerd-2dfc68dbfceeebda2503cfb547257e98a5b7428b6dd6bd8849d52810265eba5f.scope - libcontainer container 2dfc68dbfceeebda2503cfb547257e98a5b7428b6dd6bd8849d52810265eba5f.
Dec 13 01:18:55.030594 containerd[1459]: time="2024-12-13T01:18:55.029855211Z" level=info msg="StartContainer for \"2dfc68dbfceeebda2503cfb547257e98a5b7428b6dd6bd8849d52810265eba5f\" returns successfully"
Dec 13 01:18:55.512261 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:18:58.761395 systemd-networkd[1371]: lxc_health: Link UP
Dec 13 01:18:58.772409 systemd-networkd[1371]: lxc_health: Gained carrier
Dec 13 01:18:59.076955 kubelet[2627]: I1213 01:18:59.076086 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z7g6n" podStartSLOduration=9.07605548 podStartE2EDuration="9.07605548s" podCreationTimestamp="2024-12-13 01:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:18:55.934624425 +0000 UTC m=+113.582360358" watchObservedRunningTime="2024-12-13 01:18:59.07605548 +0000 UTC m=+116.723791410"
Dec 13 01:19:00.099456 systemd-networkd[1371]: lxc_health: Gained IPv6LL
Dec 13 01:19:00.574674 systemd[1]: run-containerd-runc-k8s.io-2dfc68dbfceeebda2503cfb547257e98a5b7428b6dd6bd8849d52810265eba5f-runc.EifQIE.mount: Deactivated successfully.
Dec 13 01:19:02.375586 ntpd[1428]: Listen normally on 15 lxc_health [fe80::c8dc:5bff:fe6e:9351%14]:123
Dec 13 01:19:02.376320 ntpd[1428]: 13 Dec 01:19:02 ntpd[1428]: Listen normally on 15 lxc_health [fe80::c8dc:5bff:fe6e:9351%14]:123
Dec 13 01:19:02.528927 containerd[1459]: time="2024-12-13T01:19:02.528851477Z" level=info msg="StopPodSandbox for \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\""
Dec 13 01:19:02.529528 containerd[1459]: time="2024-12-13T01:19:02.529005225Z" level=info msg="TearDown network for sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" successfully"
Dec 13 01:19:02.529528 containerd[1459]: time="2024-12-13T01:19:02.529025917Z" level=info msg="StopPodSandbox for \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" returns successfully"
Dec 13 01:19:02.531271 containerd[1459]: time="2024-12-13T01:19:02.529716660Z" level=info msg="RemovePodSandbox for \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\""
Dec 13 01:19:02.531271 containerd[1459]: time="2024-12-13T01:19:02.529757550Z" level=info msg="Forcibly stopping sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\""
Dec 13 01:19:02.531271 containerd[1459]: time="2024-12-13T01:19:02.529837902Z" level=info msg="TearDown network for sandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" successfully"
Dec 13 01:19:02.535367 containerd[1459]: time="2024-12-13T01:19:02.535317999Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:19:02.535484 containerd[1459]: time="2024-12-13T01:19:02.535448813Z" level=info msg="RemovePodSandbox \"939c79067c48d685862e62cacf735203c0da063070e478e2ebf4a246e7f5ec97\" returns successfully"
Dec 13 01:19:02.536423 containerd[1459]: time="2024-12-13T01:19:02.536386224Z" level=info msg="StopPodSandbox for \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\""
Dec 13 01:19:02.536522 containerd[1459]: time="2024-12-13T01:19:02.536496590Z" level=info msg="TearDown network for sandbox \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\" successfully"
Dec 13 01:19:02.536604 containerd[1459]: time="2024-12-13T01:19:02.536524615Z" level=info msg="StopPodSandbox for \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\" returns successfully"
Dec 13 01:19:02.537244 containerd[1459]: time="2024-12-13T01:19:02.537188091Z" level=info msg="RemovePodSandbox for \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\""
Dec 13 01:19:02.537244 containerd[1459]: time="2024-12-13T01:19:02.537242132Z" level=info msg="Forcibly stopping sandbox \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\""
Dec 13 01:19:02.537395 containerd[1459]: time="2024-12-13T01:19:02.537317829Z" level=info msg="TearDown network for sandbox \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\" successfully"
Dec 13 01:19:02.542844 containerd[1459]: time="2024-12-13T01:19:02.541882053Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:19:02.542844 containerd[1459]: time="2024-12-13T01:19:02.541949667Z" level=info msg="RemovePodSandbox \"96dbead1b310806ff3ba632f06e68652b48cab9639e75ec6dcf5e7446513609b\" returns successfully"
Dec 13 01:19:02.855911 systemd[1]: run-containerd-runc-k8s.io-2dfc68dbfceeebda2503cfb547257e98a5b7428b6dd6bd8849d52810265eba5f-runc.gZG4I7.mount: Deactivated successfully.
Dec 13 01:19:05.213376 sshd[4515]: pam_unix(sshd:session): session closed for user core
Dec 13 01:19:05.219467 systemd[1]: sshd@28-10.128.0.51:22-147.75.109.163:34988.service: Deactivated successfully.
Dec 13 01:19:05.222566 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 01:19:05.223706 systemd-logind[1445]: Session 29 logged out. Waiting for processes to exit.
Dec 13 01:19:05.225241 systemd-logind[1445]: Removed session 29.